Friday, January 14, 2011

Share sheetfed scanner over network

Greetings, I have a problem sharing a sheetfed scanner (Panasonic KV-S...). How can I network this type of USB device between multiple computers on our LAN? Our supplier recommends the software tool USB to Ethernet Connector. Does anybody have any experience with this tool? Or maybe another solution? Thanks...

  • If you are able to connect it to a Linux box and your scanner is supported, SANE (http://www.sane-project.org/) has pretty good support for sharing scanners over a network. There is a Windows client available, but I don't know if there is any server support ...
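
    In case it helps, a minimal sketch of that setup looks roughly like this (untested here; the subnet and hostname are placeholders, and it assumes the scanner already shows up locally with scanimage -L on the Linux box):

    # --- on the Linux box with the scanner attached (the SANE server) ---
    # allow your LAN to reach saned (file: /etc/sane.d/saned.conf)
    echo "192.168.1.0/24" >> /etc/sane.d/saned.conf
    # saned is normally started from (x)inetd; on Debian/Ubuntu of this era, set RUN=yes in /etc/default/saned

    # --- on a Linux client ---
    # point the "net" backend at the server (file: /etc/sane.d/net.conf)
    echo "scanserver.example.lan" >> /etc/sane.d/net.conf
    scanimage -L    # should now list the remote scanner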

    Good luck !

    From Guillaume
  • I can recommend USB to Ethernet Connector. You can try its demo from the product page, and check the wiki or contact the developer's support team if you have any questions…

    ** My usage scenario was accessing USB devices from within a Hyper-V session, and they even developed a custom solution (secure backup over the network) for my client.

    Hope this helps)

Low-cost ISP failover for inbound traffic?

We host one web server on our office internet connection (cheap!). The DNS servers are external and not provided by the ISP.

When the connection goes down we would like to have a backup solution. The basic idea is to get a second internet connection with a different ISP (separate last-mile), and a different IP number.

How would one go about minimizing the downtime for the users of our web site? How far would we get by setting the DNS TTL to perhaps a couple of minutes, and then be ready to switch over to the backup IP number when problems occur (automatically or manually)?

  • You need to say what your equipment is. There are multiple parts to this: do you want failover for just your ISP, or for your routers as well? Also, even if you have different ISPs, the 'last mile' might be the same. There is also the question of how clients reach the servers; for instance, if it were by IP, the other ISP would give you a different IP, so you need failover at the DNS level as well.

    If you want to fail over to another connection on the same router, with Cisco you would just set an administrative distance (AD) on a second default route that is a higher number than the AD on the primary default route.
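
    For what that looks like in practice, here is a minimal IOS sketch (the next-hop addresses are placeholders):

    ! primary default route via ISP A; backup via ISP B with a worse (higher) AD
    ip route 0.0.0.0 0.0.0.0 203.0.113.1
    ip route 0.0.0.0 0.0.0.0 198.51.100.1 250

    Bear in mind this only helps outbound traffic, and only kicks in when the primary next hop actually goes away; if the WAN handoff stays up while the ISP is broken you would need something like IP SLA tracking on top, and inbound failover still needs DNS or BGP as discussed in the other answers.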

    Really, you need to give more details of what you are trying to achieve with what equipment. The answers to this question might get you started.

  • A nice option would be to use a PI (provider-independent) IP range, for example a /26, and split it into two /27 pieces.

    Have each /27 half routed primarily by one ISP, and also announce each half via the other ISP with a higher metric.

    That way, if one of the ISPs goes down, the other ISP will still route the whole PI range.

    The only awkward part is that you then have two gateways but just one default route on the router/firewall devices behind them. That means you have to configure a backup route, or ask your ISPs about the possibility of running HSRP.

    Another way would be to use your own BGP devices.

    Alnitak : A PI space any smaller than a /24 won't work on the internet - most global IP networks will ignore route advertisements that small.
    From sam
  • There are several problems to overcome for this to work:

    1. IP ranges - typically you'll get different IP addresses from each ISP. When you fail over you need inbound connections to arrive at the second set of IPs. For greatest resilience obtain a /24 "provider independent" IP block (or larger) and arrange for your (expensive) router to speak BGP4 with your ISPs.

    2. DNS entries - unless you have your own range of IPs (see #1 above) you need to have your DNS entries change on the fly. However, many (broken) clients will ignore any TTLs that you publish and will continue trying to access the old IP range. The consensus view amongst DNS experts is that DNS is not the right way to achieve redundancy.

    3. Outbound traffic - your servers need to know which internet connection to send the return packets out of. This is potentially easier if you have both connections coming into a single router / firewall, but that then becomes a single point of failure too.

    From Alnitak
  • If you are a small to medium sized business, check out this product. http://www.ecessa.com/pages/products/products%5Fpowerlink%5Fpl50.php

    Arne Evertsson : Please tell me about that product and similar products right here.
  • If you already have the multiple providers, you just need a failover method. The easiest approach is to do as described and use DNS changes to point at the active IP address.
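
    To make the "be ready to switch" part concrete, a rough sketch (not a tested tool) is a health check run from cron every minute or two that flips the A record with a dynamic DNS update; this assumes your external DNS servers accept TSIG-signed updates, and every name, address and key file below is a placeholder:

    #!/bin/sh
    # if the primary address stops answering, repoint www at the backup with a 120s TTL
    PRIMARY=203.0.113.10
    BACKUP=198.51.100.10
    if ! curl -s -m 10 "http://$PRIMARY/" >/dev/null; then
        printf 'server ns1.example.com\nzone example.com\nupdate delete www.example.com. A\nupdate add www.example.com. 120 A %s\nsend\n' "$BACKUP" \
            | nsupdate -k /etc/failover.key
    fi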

    I've had good success with PepLink, particularly the 20W, which is relatively inexpensive.

    The BGP route as noted in another answer is more complicated (and expensive) and requires that your upstream providers allow BGP advertisements, which many last-mile providers do not.

    From ctennis
  • Peplink Balance 210 or above has a built-in DNS server. It allows you to load-balance AND fail over inbound traffic automatically. The TTL value is up to you; my preferred value is 360 seconds.

    To get a feel for how it works, try adding a domain and creating an A record on their demo site.

lsass.exe error, Windows cannot boot.

This is the apocalypse. The server threw an "lsass.exe" error this morning, saying that it cannot boot, with the following error:

LSASS.EXE - System Error, security accounts manager initialization failed because of the following error: Directory Services cannot start. Error status 0xc00002e1.

I don't even get to the boot screen.

I can successfully boot into Directory Services Restore Mode.

I'm beyond horror and panic at the moment. The system told me the user hive was corrupted, but the recovery worked out okay, or so said the message box.

As far as I know, there is no disaster recovery plan at all. The boss said that there MIGHT be a Ghost image somewhere. If I don't find one, there isn't one.

The question is simple. I have to improvise the best plan ever or we're all dead. What should I do, apart from trying not to panic?

The system is Windows Server 2003 with onboard SiS RAID, with two SCSI drives in RAID 0+1.
The drivers and system are up to date.
There is seemingly no virus in there, though I wouldn't rule out that possibility.
Security is a mess to start with.

This is a follow-up to my epic odyssey of tragic death:
http://serverfault.com/questions/52312/write-read-errors-raid1-recovery
http://serverfault.com/questions/49424/0x00000077-error-on-the-corporate-server
http://serverfault.com/questions/53349/windows-server-2003-sisraid-error-device-scsi-sisraid1

  • Here is a Microsoft KB reference to start with:
    "Directory Services cannot start" error message when you start your Windows-based or SBS-based domain controller.

    I have not looked deeply into your other questions, and I do not see anything here suggesting you have done a Microsoft KB lookup.
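
    For anyone who lands here later, the usual first checks from Directory Services Restore Mode look roughly like this (a hedged sketch only: paths are the defaults, the KB article is the authority, and you should image the disk before repairing anything):

    rem run from a command prompt inside Directory Services Restore Mode
    ntdsutil "files" "info" "quit" "quit"
    ntdsutil "files" "integrity" "quit" "quit"
    esentutl /g "C:\WINDOWS\NTDS\ntds.dit"
    ntdsutil "semantic database analysis" "go" "quit" "quit"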

    MrZombie : Okay, so my Active Directory database is corrupted. This is bad?
    nik : @MrZombie, did you look at the `Resolution` steps on that link? Do you have anything to add here so that people can try helping you? Or are you just waiting for a decision to restart with fresh data (and forget all the old data)?
    MrZombie : This was selected as a good answer, because it SHOULD have led me to the right recovery. But, all those steps failed and in the end, my boss decided that it was a good idea to review everything IT-based. Thanks, folks!
    From nik
  • I have never experienced that particular error before, but I don't think it's panic time. Is the Event Viewer available in restore mode? If so, check it out; maybe it will give you some idea where to start.

    If not, I have used the ERD Commander boot disk many times on our Win2000 AD Server. It will allow you to boot from the ERD CD and 'attach' a Windows installation.

    Once booted, you have a windows-like desktop and can do many helpful tasks, such as view event viewer, browse the drives, anything really.

    Good luck. EDIT...from: http://windows.ittoolbox.com/groups/technical-functional/windows2000-l/lsassexe-system-error-directory-services-649051

    'This issue can occur if the path to the NTDS folder that holds the Active Directory database files and log files does not exist, or the NTFS permissions on this folder and database files are too restrictive, and Active Directory cannot start. See Q258007 and Q295932 for more details. Also check Event ID 26 from source Program Popup.'

    From cop1152
  • I had a very similar error on a WinXP machine with a dying drive. There were bad blocks randomly appearing here and there, destroying important system files... What did I do? I used SpinRite to recover the bad blocks, then I booted from the SystemRescue CD to restore the missing DLL from another machine.

    Then I changed the hard drive for a better one :)

    MrZombie : I like the solution, but SpinRite isn't free, and there is no other machine to restore DLLs from. Oh, and no spare parts. -_-
    wazoox : Well, SpinRite isn't free but it's cheap enough. For a professional setup it's a life saver, and while there are completely free alternatives to Ghost, Partition Magic, etc., I know of no free alternative to SpinRite.
    From wazoox
  • http://support.microsoft.com/kb/830574

    You receive a "lsass.exe-system error: Security Accounts Manager initialization failed" error message and event ID 1168 is logged when you restart a Windows Server 2003 domain controller

    From Kev

Web Site Anti-Defacement

What are the best practices, policies, and tools/utilities for monitoring and preventing website defacement?

  • This is an extremely open ended question, and the best answer is really "It depends." What you do to protect your site depends on a lot of factors.

    Are you on a shared hosting plan, VPS, or dedicated host? If you're on a VPS or dedicated host you're responsible for that machine's security - meaning configuring a firewall, host based IDS, locking down any open ports and using strong authentication, keeping your patches up to date, etc. If you're on a shared host - do they have a good record for security?

    Do you have a brochure-ware static HTML site that's only updated through FTP transfers of the latest revs of the files? Then you need to stop using FTP, use SFTP/SCP and key-based authentication and disable password authentication.

    Are you hosting a site that is more dynamic and allows user content, like a blog, wiki or forum? Then you have a lot more to be concerned about - picking a software package that has a good record for security, keeping it up to date when patches are released, and following guides for configuring it securely. Rename the administrator account and use strong passwords to start.

    You really haven't provided enough information for someone to give you details on how to specifically help you, though.

    From cji
  • Don't run a webserver.

    Seriously, though? Use the latest up-to-date software that has at least been run through an external code audit, and keep it up to date as new releases come out. Don't run unauthorized, un-audited third-party add-ons on top of that software. If you run PHP/Perl/Python/Ruby, put it through the same process. Static pages can't be exploited, but the server still can be. If you have remote access to this server, limit it as much as possible via firewall rules. You could also sign up with a company that does remote website scraping and compares your current page to the last known-good one. There are several free and several commercial options.

    You have to understand that there is no such thing as perfect security, just like in real life. Everything breaks, everything can be exploited; an attacker just has to be smarter than the code's authors. The idea is to layer your defences so that the attack becomes infeasible unless the attacker spends more than the data is worth.

  • As cji has said, there are so many variables that it is hard to know what level you are asking about.

    If you only have access to the actual site files, then it is important to keep it up to date with the latest patches and keep an eye out for any security updates. You can also run tests using Nessus or another website scanner to look for the most common vulnerabilities. This can also be outsourced but gets expensive fast. Depending on the plugin you pick, you can check for certain vulnerabilities or even weak passwords.

    As far as "monitoring" goes, you can use a service like ChangeDetection which will show any changes to the site. If you are expecting it to be static and it changes all of a sudden, this could be a sign of a hack. If your site changes a lot because it is a news site or similar situation, this method does not work very well.

    If you do have access to compiling apache, I highly recommend mod_security. It will run anything that is POSTed or GETted (is that a word?) to Apache to check for hacks. It has saved my butt a few times on applications that we need to run but are not necessarily secure.

    Finally, if you are really serious about web app security, you need to hire a firm to take care of this. However, if you are doing this for a personal project or for a low-profile, no-budget site, the above steps should help you start out. There are whole careers based on your topic, so no response here is going to be a definitive solution.

  • Some time ago, for a small static website, we used a fully read-only filesystem: a CD-ROM. It was highly cached by the kernel, so the speed was sufficient.

    warren : that is definitely a different approach! I like the alternative thinking :)
    From liori

Exchange 2003 server won't auto-forward outside of the domain. Where to look?

I'm not an Exchange buff: rule-based auto-forwarding works internally within our organisation, but auto-forwarding to off-site addresses doesn't.

I work at a small school (400 students, 150 machines) with limited access to our network from outside. Students all have internal e-mail addresses which are often used by faculty staff. However, checking their mail from outside the school is often difficult and just adds an extra address. A number have set up auto-forwarding rules but none work. No error reports are generated either (that I can see).

Any ideas where to start looking would be appreciated.

Edit: as suggested in the title, internal auto-forwarding does work.

How to get Business Intelligence Development Studio?

Hi,

I need Business Intelligence Development Studio (BIDS) installed on my workstation (Win XP/x86). I don't need the SQL server itself since I will be developing and deploying against another server.

I installed SQL Server 2005 Express Advanced Services with the Reporting Services component enabled, but it appears this didn't give me BIDS.

Is there any standalone installation for BIDS or do I have to install the full SQL Server 2005 to get BIDS? I'm pretty sure I've read that SSRS is 'free' since it's included in the express edition, but does this not include the development environment?

  • You can install SQL Server 2005 with the client tools without installing the database engine or other SQL Server services. Do you have access to an Enterprise/Standard/Developer DVD/CD of SQL Server 2005?

    If not, the free/Express version of BIDS is available for download from Microsoft. I do not know if it has the full feature set compared to the full version - anyone care to comment?

  • When you install SQL Server 2005, it can flake out and not install BIDS if you're not careful. You need to make sure the 'Workstation components' part (IIRC) is selected for installation.

  • "If not, the free/Express version of BIDS is available for download from Microsoft. I do not know if it has the full feature set compared to the full version - anyone care to comment?"

    No, this does not have the full feature set compared to the full version. I'm not sure about the differences on the Reporting Services side, but the ability to use it with Analysis Services is unavailable in the free/express version.

    -rp

SQL ASP State - Perfmon for Active User Sessions

Hi guys,

We have an IIS web farm (two servers at the moment) set up to use SQL ASP State for the session info. When I query the ASP State database table I can see the number of sessions being managed by the database.

Is there an associated Performance monitor that I can use to get the same information or is the best way to do it via a SQL Query?

I've tried setting up perfmon on the Web servers and the SQL Server and monitored everything that even sounds like Session, including:

\\WebServerA\ASP.NET Apps v2.0.50727(__Total__)\Session SQL Server connections total
\\WebServerA\ASP.NET Apps v2.0.50727(__Total__)\Sessions Active
\\WebServerB\ASP.NET Apps v2.0.50727(__Total__)\Session SQL Server connections total
\\WebServerB\ASP.NET Apps v2.0.50727(__Total__)\Sessions Active
\\WebServerB\ASP.NET State Service\State Server Sessions Active
\\WebServerB\ASP.NET v2.0.50727\State Server Sessions Active
\\SQLServer\ASP.NET Apps v2.0.50727(__Total__)\Session SQL Server connections total
\\SQLServer\ASP.NET Apps v2.0.50727(__Total__)\Sessions Active
\\SQLServer\ASP.NET State Service\State Server Sessions Active

All counters remain on 0 while running a performance test simulating 50 users, while at the same time I can see 50 sessions being created by querying the SQL session table.

Edit: The "Session SQL Server connections total" counters on the Web Servers do go up when the test is running, but it doesn't actually track the number of users (obviously, I know, but I was hoping for something)

  • Your best bet will be to query the SQL Server directly, since the database is the authoritative source of this information at this point.
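
    For reference, the query itself can be as cheap as a single COUNT against the session table. This assumes session state was installed in persisted mode, so the table lives in the ASPState database (in the default temporary mode the same table sits in tempdb), and that Expires is stored in UTC as the stock install script does:

    SELECT COUNT(*) AS ActiveSessions
    FROM   ASPState.dbo.ASPStateTempSessions
    WHERE  Expires > GETUTCDATE();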

    Gineer : Will that not affect the performance of the ASP State database? Since this is the central state store for all servers in the farm, it obviously needs to be fast. Are you telling me that there simply is no performance counter for this, or were you just giving your opinion?
    mrdenny : SQL won't provide a counter, and that would be the only server that has all the information about all the web servers in the farm. It shouldn't slow down the database, as the ASPState database is very small and it should all be sitting in the SQL Server buffer cache.
    From mrdenny
  • I've done a similar search on http://msdn.microsoft.com/newsgroups and found the same answer there: How to count the active ASP.NET Sessions (SQL Server)?

    It seems that querying SQL is the only way to get this information. I suppose the only other way to do this would be to create a custom performance counter. I figure if that were a good idea, Microsoft would simply have created the counter?!? No?

    From Gineer

Measuring performance on virtual machines

Here is the scenario:

We have two virtual machines running on the same machine using hyper-v, one is a database server and the other is a web server.

I am analysing performance information based on web requests. Each request to the web server also results in requests to services hosted on the same virtual machine as well as calls to the database server.

The information is taken from PsList which is running on both virtual machines and from JMeter which performs the requests. The information includes memory usage and cpu usage on both virtual machines over time as well as the time taken per request. I may be wrong but pslist

My first (noob) question is how to interpret the cpu usage of each virtual machine (given as a percentage). Is this the percentage of cpu that has been allocated to that virtual machine that is used, or is it the percentage of cpu usage on the actual machine on which the virtual machine is running? In other words, would you expect the total cpu usage of processes (including idle processes) on both virtual machines to total 100 or 200?

My second question is whether there is a better way to measure the performance of both virtual machines that could show the resources that are being used by each and by the host machine itself?

Many thanks, Nigel.

Setuid not working on Solaris

I have a Perl script marked setuid, but when I run it, it says I don't have permission to do so. I am running Solaris 10. This works on another system but I can't tell what's different. What am I doing wrong?

$ ls -l
total 16
-r-sr-x---   1 root     root        7354 Apr 19  2008 myscript
$ ./myscript
./myscript: Permission denied.
  • Hmm, answers to this question suggest that on more modern systems I can only setuid programs, not shell scripts. Probably on the other system it is actually a binary.

    TRS-80 : Perl isn't a shell script, and has its own mechanisms for running suid safely. In this case the first problem is permissions, as mdpc says.
  • I have to ask....The program is owned by root with group root. The user running the program is apparently not root (no # as the command prompt), but is the user in group "root"?

    The quick fix would seem to be for this specific case:

     chmod o+rx myscript
    
    David Pashley : +1 this does look like a likely reason.
    From mdpc
  • While I suspect mdpc's answer is the correct one and that you need to change permissions for "other", there is a handy technique you can use for making scripts run as other users. What you need to do is create a very simple C program that takes argv[0], appends something like ".real" and then execs that string. You then move your script from foo.pl to foo.pl.real and move your compiled binary to foo.pl and setuid that binary. Now when you run foo.pl, you'll be running foo.pl.real as the user you want.

    As with any setuid program, you want to make sure that you're not causing a security problem. You should sanitise argv[0] to make sure that it's the program you think you should be running or there's a chance of someone symlinking to the binary and getting permissions they shouldn't.

  • mdpc's answer is most likely correct, but note that Perl runs differently when it is run setuid.

    Amongst other things, it automatically turns on Perl's taint mode to force you to sanitise your input and arguments before using them. It is also very fussy about PATH and other environment variables that can be abused to compromise a system.

    See perlsec(1) for more details (note: on some systems, including Debian, the Perl docs are available as man pages; on other systems, almost certainly including Solaris 10, you'll have to run "perldoc perlsec" rather than "man perlsec").

  • Run the groups command to list the groups that you are a member of. You must be a member of the root group on the system where you're trying to run myscript.

    Check the mount options for the filesystem that the script resides on. There is a nosuid option that can be used to allow or disallow setuid or setgid execution.

    mount|grep rchuck
    /home/rchuck on homedir.mydomain.com:/export/home4/03/rchuck remote/read/write/nosetuid/nodevices/intr/retrans=10/retry=3/xattr/dev=59ca539 on Wed Jul 22 07:41:23 2009

    From Randall

How to get e-mail from (failed) cron-jobs in Ubuntu?

Hi,

I create cron-jobs in Ubuntu by placing the executable in one of /etc/cron.{daily,hourly,monthly,weekly}. There are lots of directories starting with cron:

kent@rat:~$ ls -ld /etc/cron*
drwxr-xr-x 2 root root 4096 2009-06-06 18:52 /etc/cron.d
drwxr-xr-x 2 root root 4096 2009-07-16 13:17 /etc/cron.daily
drwxr-xr-x 2 root root 4096 2009-06-06 18:52 /etc/cron.hourly
drwxr-xr-x 2 root root 4096 2009-06-06 18:52 /etc/cron.monthly
-rw-r--r-- 1 root root  724 2009-05-16 23:49 /etc/crontab
drwxr-xr-x 2 root root 4096 2009-06-06 18:52 /etc/cron.weekly

I would like to get e-mail from my scripts when:

  1. A script fails and gives an exit code of non-zero.
  2. The script has something to tell me

I have SSMTP installed and working; I send my mail via my Google account. The fact that SSMTP can only send mail using one account isn't a problem for me. It's just a home server, and my users do not have the ability to add cron jobs.

I would like to know how the mailing from scripts usually works in Linux/Unix in general and in Ubuntu specifically. I would also like to know of a good way for me to get mails in the two situations above.

  • try adding "root: your@email.address" to /etc/aliases

    that will send all messages for that user to your email. if you don't want all messages, you could create a user specifically for this.

    As long as the script outputs something, you will get a mail.

    From Daniel P
  • If you want to send all output (stdout and stderr) to a specific address then you can use the MAILTO variable. For example, place the following at the top of the crontab (it applies to the entries that follow it).

    MAILTO="address@example.com"
    
    From Dan Carley
  • By default, cron will email the owner of the account under which the crontab is running.

    The system-wide crontab in /etc/crontab runs under the user 'root'.

    Because root is used widely, I'd recommend adding a root alias to your /etc/aliases file anyway. (Run 'newaliases' afterwards.)

    The normal way to structure this is for root to be aliased to another user on the system, e.g. for me I'd alias 'root' to 'phil' (my user account) and alias 'phil' to my external email address.

    If you have a specific user cron that you'd like emailed to you on output, you can use /etc/aliases again (providing you have superuser access) to redirect the user to another email address, or you can use the following at the top of your crontab:

    MAILTO="email@domain.com"
    

    If you need more information see crontab(5) by running:

    man 5 crontab
    
    From Phil
  • I don't think SSMTP is up to what you need it to do. You need something that can "receive" mail from the cron processes and then send it out to your real mailbox.

    I use Sendmail, but that's because I'm an old Sun hand; I know it gets laughed at by all the cool kids these days who use Postfix. Your Ubuntu community can guide you with setting up your mail system.

  • In order to get email sent from Vixie cron you will need something that replicates the sendmail command, so installing Postfix or SSMTP will sort this part out. If you're using Postfix then the aliases file can be used to map system users to real email addresses.

    Adding MAILTO="foo@bar.com" to the top of a crontab will cause any output from the cron job to be emailed. This is regardless of error code.

    For scripts that correctly write errors to STDERR, it's easy to get emailed only when they go wrong; just do this:

    MAILTO="foo@bar.com"
    0 5 * * * /bin/some_script > /dev/null
    

    This will redirect just STDOUT to null. If any STDERR messages are present they will get emailed to you.

    However, I've found some scripts incorrectly write errors to STDOUT and set the exit code to 1. I have not figured out a way to grab the output from these while ignoring it when the exit code is 0. The only method I can think of is to redirect the output to a file, then, if the exit code is not 0, print that file for cron to grab. Seems pretty horrible though.
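
    For what it's worth, a small wrapper along those lines is not too horrible. A rough, untested sketch (drop it somewhere like /usr/local/bin/cron-quiet and call the real script through it, with MAILTO set in the crontab):

    #!/bin/sh
    # cron-quiet: run the given command, show its output only if it exits non-zero
    OUT=$(mktemp) || exit 1
    "$@" > "$OUT" 2>&1
    STATUS=$?
    [ "$STATUS" -ne 0 ] && cat "$OUT"    # cron mails whatever we print
    rm -f "$OUT"
    exit "$STATUS"

    The crontab entry then becomes: 0 5 * * * /usr/local/bin/cron-quiet /bin/some_script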

    From

How to use noatime with smbfs

I am using the mount command on a Linux server to access a Windows server using smbfs. Can I use noatime to prevent read operations (such as cp on Linux) from changing the last-accessed time on files on the Windows server?

If so how can I do this?

  • I think you should be able to. According to man mount, noatime falls under "FILESYSTEM INDEPENDENT MOUNT OPTIONS". Does the following work?

    mount -t cifs //server/share /mnt/smount -o username=administrator,domain=DOMAIN,noatime

    Update:

    Looks like the above does not quite cut it. It may stop the Linux VFS from updating the access time, but not Windows. However, the above in combination with changing the Windows registry not to update the access time on NTFS might do the trick:

    System Key: [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem]
    Value Name: NtfsDisableLastAccessUpdate
    Data Type: REG_DWORD (DWORD Value)
    Value Data: (1 = stop updating last-access times, 0 = update normally)
    

    This still might not affect shares, though.

    Liam : No, maybe it cannot be controlled from Linux.
    Craig Sanders : @liam: the noatime option will prevent Linux from updating the atime. whether the Windows server will update its equivalent to atime anyway is another story entirely.
    Liam : Is there a Windows equivalent to atime/noatime?

Transfer files without destination access.

Hi all,

I have an issue where I want to create a script (VB or batch file) that, when a user runs it, copies files from Folder1 to Folder2.

Here's the rub.

I don't want the users to have access to Folder2; I need them to run the script so they can't move files in manually, and it does a bit of logging as well.

I have two ideas but don't know how feasible they are:

  1. The user calls the script but it runs under different permissions. How would I do this without the user seeing the account details?

  2. The user runs a script that triggers a scheduled job on the server. The scheduled job would then run under different privileges, but the users don't have access to the server, so there may be an issue running a scheduled task.

Any other ideas would be gratefully received.

Thanks in advance

JoeOD

  • Folder2 can be shared with access denied to the users; the script then mounts the share with custom credentials, moves the files, and disconnects the share.

    There is a trick for hiding mapped network drives on Windows with the NoDrives DWORD at HKCU\Software\Microsoft\Windows\CurrentVersion\Policies\Explorer.

    PS: You can compile your .bat script with bat2exe to avoid users seeing the file content. An AutoIt script may do the job as well.

    EDIT: See this article for configuring the NoDrives DWORD value.
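
    For the record, a rough batch sketch of that approach (the share name, account, password and paths are all placeholders, and since the password ends up inside the script, compiling or obfuscating it as suggested above really matters):

    @echo off
    rem map the locked-down share with a dedicated account, move the files, log, unmap
    net use Z: \\fileserver\Folder2$ P@ssw0rd /user:DOMAIN\copybot >nul
    robocopy "C:\Folder1" Z:\ /MOV /NP /LOG+:"Z:\copylog-%USERNAME%.txt"
    net use Z: /delete >nul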

    From Maxwell

Scripting MS SQL Server 2000/2008 Roles

Hi,

I'm currently migrating a MS SQL Server from 2000 to 2008. I really want to migrate all of the roles (including all members and permissions of each role) from the 2000 box by scripting them.

Then, on the 2008 box, I want to edit them and script them again to move them to the live server.

It seems that scripting the role simply allows you to recreate the role and does not include any details of the members and permissions. Can this be done? Is it as easy as selecting 'Script Role' in Management Studio, or must I write the script myself? If so, do you have any pointers (which tables to use, etc.)?

Thanks,

oookiezooo

  • You will not need to script out roles/permissions if you're upgrading the databases (i.e. either by a backup/restore or a detach/attach). You will only need to script out the logins (which are at the database server level, not the database level). For that, there's the tool sp_help_revlogin.

    If you are re-creating the db in 2008 from scratch then there are some ready-made scripts that come with FineBuild that will let you script roles & permissions out from a database. Then you can modify these in T-SQL as you see fit before applying them to the new 2008 database.
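
    If you do end up rolling your own, the stock system procedures will at least dump the raw information on the 2000 box (run them in each database). This is only a starting point; you would still have to turn the output into sp_addrolemember / GRANT statements yourself:

    -- members of every database role in the current database
    EXEC sp_helprolemember;

    -- all granted/denied permissions (grantee, action, object) in the current database
    EXEC sp_helprotect;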

    Let me know if you need clarification.

Windows Server 2008 vs. 2003: Different behaviour with multiple IP addresses

We have an IIS server to which we assign multiple IP addresses.

In Windows Server 2003, Windows used the IP address entered in the main dialog for outgoing connections. If I assign the IP 192.168.1.4 in the main dialog and the additional IPs 192.168.1.3, .5 and .6 in the advanced dialog, Windows Server 2003 uses .4 as the source IP for requests to our SQL Server.

In Windows Server 2008, I observed that Windows uses the lowest IP address, 192.168.1.3, for the connection to our SQL Server, despite 192.168.1.4 being entered in the main dialog.

Has anyone else encountered this behaviour?

  • It does appear to be doing that. In fact it doesn't appear to matter which NIC the connection comes from, the lowest IP is being used.

    I've got two NICs in my web servers.

    One machine is 10.3.16.4 on the management NIC.

    The NIC the load balancer points to has 10.3.16.42, 45, 125, 126, 127 and 128 assigned. All connections to the SQL Server from that machine come from 10.3.16.4.

    (Two NICs are used so that I can disable the second NIC and do whatever I need to on the host without it affecting the load balancer.)

    From mrdenny
  • I'm not sure if you're looking for a way to change this behavior or not, but you can do so by changing the binding order. It's not too difficult to find documentation detailing this procedure for Windows XP and Server 2003, but I don't see any official docs on Windows Server 2008. I'm guessing it's the same or similar.

    From pk

phpMyAdmin - can't connect - invalid settings - ever since I added a root password - locked out

I run XAMPP. A few days back I set a root password through phpMyAdmin, and I have not been able to access phpMyAdmin since that moment.

I followed the help at this link, but everything seems fine there (in config.inc.php). I even tried uninstalling XAMPP fully, restarting Windows and then reinstalling XAMPP, but when browsing to localhost/phpmyadmin I still get the following error:

MySQL said: 
Cannot connect: invalid settings. 
phpMyAdmin tried to connect to the MySQL server, and the server rejected the
connection. You should check the host, username and password in your
configuration and make sure that they correspond to the information given
by the administrator of the MySQL server.

I also tried to reset the root password through mysqld.bat as described in the help on MySQL's website, but to no avail.

Please Help!

  • You can reset the MySQL root password through the CLI by running mysqld with the --skip-grant-tables option; you'll then be able to log in as root without a password and issue:

    USE mysql;
    UPDATE user SET Password=PASSWORD('newpwd') WHERE User='root';
    FLUSH PRIVILEGES;
    exit;
    

    Next, restart MySQL normally.

    Hope this helps.

    From Maxwell
  • Edit the config.inc.php file in the phpmyadmin directory and change line 73: $cfg['Servers'][$i]['password'] = ''; to $cfg['Servers'][$i]['password'] = 'yourPassHere';

    Save, and access phpMyAdmin again. This saves you from having to change the password as described by Maxwell.

    From Egdelwonk

Method of automatically transferring log files from multiple servers

Once a day, I want to run AWStats on webserver log files generated by multiple load-balanced servers. I want an efficient way of transferring them to one place. Is there already a tool that can do this?

Otherwise, I was thinking of using a cron job to grep for the current day, then tar and gzip the files before sending them over so I can merge and analyze them. Is this a good approach or can you suggest a better approach?

Thanks!

  • Have you considered using NFS on the servers to mount a directory from another server?

    From mdpc
  • Just rsync the logs over to your analysis machine, saves a hell of a lot of unnecessary logic.

    From rodjek
  • Use rsync

    I'd use rsync in a cron job. Fast, reliable, simple.

    From KPWINC
  • In the AWStats tools you can find a Perl script (logresolvemerge.pl) for merging log files in the load-balanced case. This script can help you AFTER the copy of the web servers' logs (rsync is a good choice - see the answers from rodjek and KPWINC).
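
    A rough nightly cron sketch combining the two (hostnames, log paths and the location of logresolvemerge.pl are placeholders for whatever your distribution uses):

    #!/bin/sh
    # pull yesterday's rotated log from each web head, then merge them for AWStats
    DEST=/var/log/weblogs
    for h in web1 web2; do
        rsync -az "$h:/var/log/apache2/access.log.1" "$DEST/access-$h.log"
    done
    perl /usr/share/awstats/tools/logresolvemerge.pl "$DEST"/access-*.log > "$DEST/access-merged.log"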

    From lg
  • There are tools meant to act as a central log management system for Linux that don't require the logs to be copied - at least not in the sense you are talking about; you can just set up NFS mounts or install clients on the machines.

    An easy one with a nice web interface is Splunk; it is free for up to 500 MB a day of indexing, without authentication on the web interface.

    The classic, more manual method is syslog-ng, which might already be on your system. Here is a tutorial on setting up a central log server with that.

What is the best firewall/iptables management tool for multiple servers?

We are setting up iptables on each server we run. Is there a Nagios-like tool that will allow us to see and manage the rules from a central console, without requiring us to log in to each server and set up each and every iptables configuration?

If there is an open source firewall that does this I'd be glad to know. (we don't want to use webmin)

  • We're looking at doing this using Puppet - there's a module for iptables configuration

    From Whisk
  • My recent question on large-scale firewalling produced fwbuilder as being a possible contender -- apparently (I haven't evaluated it yet) it allows you to describe all of your firewalls in one place and then have them applied where they're needed. Could be worth a look.

    From womble

Graphing/Reporting PHP Errors

What's the best way to get reports on PHP errors?

To give a bit of background: we have some legacy PHP applications/websites that generate various errors/warnings etc., which currently go to the Apache log.

I would like to be able to graph these somehow and have them showing on a screen in the office the developers are in.

The hope is that increasing the visibility of these errors will firstly make people aware quickly if an upgrade increases the error rate, and also help in the longer term quest to drive the errors down towards zero.

For extra points, it would be nice to be able split the errors up by a part of the path to the file causing the error (ie: split the errors by site).

What's the best tool for this? I was looking at Cacti, and have used Zenoss for other monitoring before, but I can't find info on doing exactly this with either - hoping someone else has done it!

  • Try Splunk

    http://www.splunk.com

    From
  • You can do that (and much more) with my log management project Octopussy, but it's probably a little bit complicated for just what you need...

    But if you want to try it, I could probably help on that particular need.

    From sebthebert
  • What I did at work was simply to set up a bunch of terminals running tail on a big screen. It's been very effective, as you get all the errors in real time.

    Basically:

    - Set up a logmonitor account on all related machines; make sure it can read the log files, and not much else.
    - Generate an SSH private/public keypair on the computer doing the monitoring and set it up on the machines you will be accessing.
    - Set up a bunch of terminal windows to automatically load on boot, connect to each server and start tail'ing the logfiles.

    Make sure you use tail -F instead of tail -f, otherwise the log will stop scrolling when it's rotated.

    We set this up on a Mac running Leopard, so it came with everything we needed out of the box: ssh, Terminal with "window groups", and profiles for running a custom command instead of the normal shell on connect.

    Of course, the next level of this would be to set up your sysloggers to collect all the logfiles from all the different machines in one place.
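
    For the "split by site" wish in the question, even a crude one-liner over the collected Apache error log gives a count per docroot that can be fed into whatever draws the graphs; this assumes the stock PHP error format ("... in /var/www/<site>/file.php on line N") and a one-directory-per-site layout:

    grep ' PHP ' /var/log/apache2/error.log \
      | sed -n 's|.* in \(/var/www/[^/]*\)/.*|\1|p' \
      | sort | uniq -c | sort -rn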

    benlumley : mmm. was thinking about this as a start. will give it a shot.
    From xkcd150

Reputable Biometric Fingerprint Scanner & Access/Attendance Solutions?

Hi, I have a client, a school, looking to implement a fingerprint or hand scanner to track both employee time and student attendance.

As per earlier conversations on here, this is not for high security & access (i.e.- automatic door locks).

From what I've researched, the field is full of unknown companies, many based in China, offering brands that don't seem to have any reputation or case studies. It makes me very nervous to recommend something from a market that seems quite unknown, if not a tad nefarious.

Even the brand the client saw and liked makes me hesitate: when we called, no one would quote a price and we were told a "dealer" would have to get back to us: http://www.galaxysys.com/index.php?tpl=readers/biometric/biometric

Any personal recommendations or experiences would be appreciated.

Thanks in Advance ~R

  • Bioelectronix has setups that start at $199 with attendance software, sells direct, and is based in the US. Seems like they might be worth a look.

    If you wanted to have a coder make something up for you, you could look at something like the Griaule api.

    From Adam Brand
  • I would recommend the Recognition Systems HandPunch units.

    We use the HandPunch 2000 at work for time & attendance duties for our janitorial staff (large airports, companies etc). I wrote the software using the .NET API (C#). We've got about 10 units running and another 7 on the way. Very stable, with no problems/failures for the past few years. I have seen these units being used in data centers for identity verification in mantraps and other door locks.

    The units are fairly expensive - each costs around 1000 bucks - but they give you flexible connection options (modem, serial, network, etc.) as well as APIs in popular languages, enabling you to either develop your own solution or purchase one.

    From J Sidhu

Need to know the commands to Start and Stop Microfocus Server

I have Microfocus Server installed on AIX. What I figured out is that all the servers are under:

cd /var/mfcobol/es
# ls -l
total 40

drwxr-xr-x    2 root     system          256 Aug 13 09:53 ABCD
drwxr-xr-x    2 mfuser   system         4096 May 13 13:17 AISDEV
drwxr-xr-x    2 mfuser   system          256 Apr 23 16:40 AISPRD
drwxr-xr-x    2 mfuser   system         4096 Aug 06 19:07 AIXDEV
drwxr-xr-x    2 mfuser   system         4096 Aug 06 13:35 AIXPRD
drwxr-xr-x    2 mfuser   system         4096 Aug 06 13:28 AIXUAS
drwxr-xr-x    2 mfuser   system          256 Apr 29 19:59 ESDEMO
  1. As per the above results, these live under /var and the filesystem is /dev/hd9var. I would like to know the actions to execute in case we need (or receive a request) to stop and start the AIXDEV and AIXUAS partitions on the AIX machine. (Also let me know whether my finding that the Microfocus server is installed at /var/mfcobol/es is correct or not.)

  2. Is there any other way to find out where exactly my Microfocus server is installed on my AIX machine?

  3. What are the commands to start and stop the Microfocus Server? For example, if I need to start/stop only the dev server (AIXDEV), what is the command for that?

  • A) How to start Microfocus Server

    To start Enterprise Server Administration, enter the following commands:

    su root
    cd $COBDIR/bin
    mfds &
    exit
    

    Then open a Web browser and specify http://host:86, where host is the machine on which Enterprise Server for UNIX is installed.

    Another way to start the Microfocus Server is by using the command:

    casstart -rAIXDEV (AIXDEV is the name of the server you want to start)
    

    Below is an example:

    # /opt/microfocus/cobol/bin/casstart -rAIXDEV
    
    CASSI1872S Requested enterprise server instance already started 11:38:50
    

    B) How to stop Microfocus Server

    There are two ways to stop the Microfocus server, described below.

    1. On UNIX platforms, you can stop it from the left-hand panel of the Enterprise Server Administration page (Actions / Shutdown).

    2. Or from the commandline:

      mfds [-p port-number] -s 2 [username password]
      

    (This shuts down the Directory Server and any associated enterprise servers.)
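
    C) How to stop a single server

    To stop just one enterprise server instance (for example AIXDEV) rather than the whole Directory Server, there should be a casstop counterpart to casstart; please check the Micro Focus documentation for your version:

    casstop -rAIXDEV (stops only the AIXDEV server instance)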

    Please correct me if I am wrong anywhere