Thursday, January 20, 2011

Server-based homedirs and universal logon on a Linux network? (Other than with NFS+NIS?)

This question was kinda, sorta, already asked, but there's a security issue with the NFS+NIS setup that still bugs me: if you know somebody's username/uid, you can set up a *nix computer, create a user locally with the same name/uid, and presto, you can mount the server FS and it'll give you access to the user's files.

So my question is specific: is there an alternative setup that'll give no access to server files, no way, no how, unless you have a valid username AND password, even if you set up your own computer and place it on the network?

Users still need to be able to log on to their workstations and work with their server files without having to give their password all the time. It's OK if it only works in a Linux GUI like Gnome.

  • I'm no fan of NFS/NIS, but the security issue you describe is a configuration problem, not an inherent flaw: You can configure your NFS server to only honor mount requests from known hosts (or if you want to get fancy, restrict it by netgroup), so J. Random Hacker can't just plug in and start mounting stuff. See the exports(5) manpage for the details, or skim a copy of the O'Reilly NFS+NIS book for really good examples.

    (This doesn't solve any of the other problems with NFS/NIS that are inherent flaws, but it's the immediate fix for your immediate problem.) :)
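    As a sketch (the hostnames and exported path here are assumptions; see exports(5) for the real options), a Linux-style /etc/exports restricted to known hosts might look like:

```
# /etc/exports -- only the named hosts may mount /home; anyone else is refused
/home   ws01.example.com(rw,root_squash) ws02.example.com(rw,root_squash)
# or, using a netgroup defined in NIS/LDAP:
/home   @trusted-workstations(rw,root_squash)
```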

    From voretaq7
  • Can't the security issues be overcome by using NFSv4 and Kerberos? Kerberos was mentioned in the answer to your previous question, but you don't mention it in your new question about security concerns.

    I don't see how an NFS share that requires Kerberos authentication can be fooled by a simple matching username/uid.

    A tutorial for Ubuntu, if that's what you're using, can be found here.
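    To illustrate (the export path is an assumption), a Linux NFSv4 export that demands Kerberos would look something like this; without a valid ticket the mount is refused no matter what username/uid the client presents:

```
# /etc/exports -- sec=krb5p requires Kerberos authentication, plus integrity and privacy
/export/home  *(rw,sec=krb5p,fsid=0)
```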

    voretaq7 : Only caveat to NFSv4+Kerberos is that everything needs to support it. (Probably not an issue if everything in your environment is new, or at least pretty recent)
  • LDAP or Samba for unified logons. Any network filesystem should let you complete the equation.

    From alex
  • NIS is ancient, forget it even exists. You can use LDAP to achieve much the same thing in a much more modern way. I suggest you look at Kerberos and NFSv4 for user authentication. It's a fair bit of infrastructure to set up but once you get it all rolling it's not too difficult to maintain.
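    As a sketch of the LDAP half (assuming nss_ldap/pam_ldap or an equivalent is installed and pointed at your directory), user and group lookups are switched over in /etc/nsswitch.conf:

```
# /etc/nsswitch.conf -- consult local files first, then the LDAP directory
passwd: files ldap
group:  files ldap
shadow: files ldap
```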

In FreeBSD 8 how can I recover a deleted slice/partition on a flash disk?

I accidentally deleted the primary partition on my USB flash disk.

Is there a way of undoing that? I can't even seem to see the drive anymore in Windows; it's not available in the disk utility.

If I can even format it again (as msdosfs) that would be great.

  • If you don't care about the data, re-run the FreeBSD installer, select a "custom" installation type, select option 3 ("Partition") and create a new partition (you can press A to have the partitioning tool grab the whole disk, then just change the type to something Windows will like (7 = NTFS) and Windows should be able to see it & format it).
    Remember to write the new partition table (W) before exiting.
    Obviously be extra careful, since a mistake here (like selecting the wrong drive to re-partition) could really ruin your whole day...

    Obviously you should be able to do the same thing with the Windows partitioning tools (but I don't have any of those handy).


    If you want to actually get your data back the answer is somewhat more involved (You'd need to re-create the partition table as it was before, which presupposes you know the old layout. If not you start getting into disk exploration or recovery tools).
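    The same thing can be done from a FreeBSD shell instead of the installer; a sketch, assuming the flash disk shows up as da0 (double-check with dmesg first, since pointing these at the wrong disk is unrecoverable):

```shell
# write a fresh MBR with one slice covering the whole disk, then put an msdosfs on it
fdisk -BI da0
newfs_msdos /dev/da0s1
```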

    Matt : Ok, so the windows PC didn't see my flash disk because the USB port on the front of the computer (not my normal one) doesn't work. Plugged it into the back and it's fine. Your procedure also works.
    voretaq7 : Yeah, a dead USB port would make the system not recognize the flash disk :-) Out of curiosity, have you ever tried a thumb drive with a scrambled partition table on a Windows host? Curious how it reacts...
    From voretaq7

Internet Explorer 8 causing login prompt connecting to Sharepoint 2007 (from separate workstation)

We are using integrated windows authentication (I think NTLM). All other clients seem to be working fine, IE6 or IE7. The intranet site is in the trusted sites zone in IE.

First user to test said IE8 originally worked, but a patch may have broken it. I tested with an upgrade of my working IE7 install to IE8 and see the same behavior.

When going to the site we get the login prompt, we can authenticate via domain\username & password. We also get prompted again when accessing something from the document library.

Has anyone else seen this behavior?

Will Sharepoint 2007 SP2 or switching to Kerberos fix it?


The link below may be related, but it references running IE8 on the server; in addition, we can still authenticate, just not automatically.

http://serverfault.com/questions/32345/ie-8-authentication-denied-on-local-sharepoint-site

  • Check to make sure your Trusted Sites zone allows 'Automatic login with current username and password'.

    Open IE, Tools -> Internet Options -> Security -> Trusted Sites.

    1. Click 'Sites' and make sure your SharePoint site url is in there.

    2. Click 'Custom Level', scroll to the bottom and ensure 'Automatic Logon with current user name and password' is selected.

    atom255 : Spot on Thanks!
    From Moo
  • I have the above settings but I still have trouble with prompting. Does having the SharePoint site explicitly listed fix this? Right now we have Trusted Sites set as *.portal.com, which should catch all the SharePoint servers.

  • Hi, I got the same prompt on an anonymous-access site; I mean, users can access information without authentication, but when they try to open files they are prompted with a username and password window.

    Using IE7, Chrome or Firefox does not present that window. Can anyone offer some idea?...

Have I bricked my Sun V20z?

I have a small pile of Sun V20z computers. I was trying to update the SP and BIOS firmware in order to bring them all up to the same standard -- mostly to get the updated (i.e. actually useful) SP functionality, and figured that I would just do the BIOS while I was at it.

For three of the four computers, it worked perfectly. However after the BIOS update, the fourth system won't boot.

I did this:

batch05-mgmt $ sp get mounts
Local Remote
/mnt  10.16.0.8:/export/v20z
batch05-mgmt $ platform set os state update-bios /mnt/sw_images/platform/firmware/bios/V1.35.3.2/bios.sp
This command may take several minutes.  Please be patient.
Bios started
Bios Flash Transmit Started
Bios Flash Transmit Complete
Bios Flash update Progress: 7
Bios Flash update Progress: 6
Bios Flash update Progress: 5
Bios Flash update Progress: 4
Bios Flash update Progress: 3
Bios Flash update Progress: 2
Bios Flash update Progress: 1
Bios Flash update complete
batch05-mgmt $ platform set power state on
This command may take several minutes.  Please be patient.

After an hour of waiting, it still won't start. The chassis powers on, but beyond the fans spinning up and the hardware POST of the drives, nothing appears to happen.

So if I try to re-flash the BIOS (on the theory that maybe something went wrong):

batch05-mgmt $ platform set os state update-bios /mnt/sw_images/platform/firmware/bios/V1.35.3.2/bios.sp
This command may take several minutes.  Please be patient.
Bios started
Error. The operation timed out.

Have I bricked it?

  • This is what I found when I googled "Error. The operation timed out. updating bios sun V20z":

    A procedure for returning the operating system of the Service Processor to its factory settings:

    Open the Sun Fire[TM] V20z chassis and connect pins 1 and 2 at jumper 19. This will send the console output of the service processor to the serial port. Where to locate jumper 19 on the motherboard is described in document 817-5248, the Sun Fire V20z and Sun Fire V40z Servers User Guide

    http://docs.sun.com/source/817-5248-21/prefnum.html

    This was the original question/answer:

    fixunix.com/sun/113979-problem-updating-bios-v20z-sp-went-ok-bios-wont-go-machines-wont-start-anymore.html

    David Mackintosh : Hmmm. Doesn't work. Reading closely, the supplied method resets the settings in the SP firmware. The revision of SP firmware was unaffected by this procedure. Also the BIOS remained unstartable. Google doesn't bring back anything more than variations on this theme. (And this question is the number 2 response now -- well done ServerFault.) Failing a miracle from a contact at Sun, I think I've bricked it. Not the end of the world, I have more of these coming.
    From aspitzer
  • From my wiki page documenting this:

    Possible Lead

    This Page describes a "Crisis Recovery Diskette" for the v40z, which is related to (yet different from) the v20z. Maybe there exists such a disk image for the v20z.

    Ooh, there is. According to that page, I need the file v20-cr.img from http://systems-tsc/twiki/bin/view/Products/ProdTroubleshootingV20z -- although that appears to be an internal system at Sun, so I can't get at it.

    A hint on Twitter prompted me to send an email to Bill Bradford at sunhelp.org, who had this to say:

    The only thing I can suggest is to try to find someone at Newisys (or what's left of it after they got bought by Sanmina-SCI), as the v20z and v40z were actually Sun-rebadged/branded Newisys Opteron servers (the v20z is the same system as a Newisys 2100). I just looked on their web page, but the "Support" section requires a login (can't get it to let me register) and they stopped making servers in '07. An email might turn up good results though.

    At this point I have spares so I don't need to dig into this any more.

  • Perhaps not a direct answer, but this may help someone.

    I just updated a v20z and v40z to firmware current as of March 2010.

    As I was also re-imaging the OS, I had a crash cart connected.

    After the BIOS update, neither system recognized a Dell USB keyboard. A Sun USB keyboard was recognized, if it was plugged in at BIOS startup.

    Unfortunately, the Linux kickstart CD, seeing the PS/2 ports, did not recognize any USB keyboard...

    Switching to a PS/2 keyboard resolved the problem.

Backup a hosted Sharepoint

One of my customers has outsourced their Sharepoint and Exchange services to a hosted services provider. I believe it is a Sharepoint 2007 service. It is a shared hosting solution, so we do not have any kind of access to the server itself; we only have user-level and sharepoint-administrator-level access to the Sharepoint application.

They have come to the point where they would like to have a copy of everything that is on the Sharepoint server.

I have downloaded the Office SharePoint Designer 2007, and it features three (!) ways to back up a SharePoint server, none (!) of which work for me:

  • File->Export->Personal Web Package: When selecting everything, it calculates a negative size. Barfs with No "content-type" in CGI environment error.
  • File->Export->Sharepoint Template: barfs with a A World Wide Web browser, such as Windows Internet Explorer, is required to use this feature error.
  • Site->Administration->Backup Web Site: wants to create the backup .cmp file on the sharepoint server itself. I don't have access to any servers on the same network so I can't redirect it to any form of the suggested \\server\place. Barfs with a The Web application at $URL could not be found. [...] error. Possibly moot because Google tells me that bad things happen using OSD to back up sites larger than 24MB (which this site is most definitely).

So I called the helpdesk of the outsource provider, and got told that they recommend using OSD, but no they don't actually provide any application support for OSD (not that I blame them for that), but they could do a stsadm.exe backup and provide us with that, and OSD should be able to read the resulting cmp file. Then for authorization reasons they had my customer call them directly (since I can't authorize such an operation), and they told him that he didn't want a stsadm.exe backup, he wanted to get into an 'explorer view' and deal with things that way (they were vague). Google hasn't been much help in figuring out what an 'explorer view' is, let alone how I bring one up.

The end goal of this operation is to have a backup of the site as it exists (hopefully today, but shortly anyway) in such a format that we don't need another SharePoint server to restore it to. I.e., we'd like to be able to pick individual content directly out of this backup. We are not excessively concerned with things like formatting. We just want the documents.

This is a fairly complex site with multiple subsites and multiple folders per subsite, so sitting there and manually downloading each file isn't really going to happen if there is a better easier way.

So, my questions:

  • Is the stsadm.exe backup what I want? If not, what do I want?
  • If I manage to convince them that I do want the stsadm.exe backup, can I pick files out of the resulting backup file with OSD?
  • If OSD isn't going to let me extract individual files, is there a tool I can use that can?
  • A semi hacky-solution which will get the documents backed up:

    Connect to the Sharepoint server using WebDAV. A sitewide read-only account would be ideal to prevent accidents. You may need to ask your provider to enable WebDAV; also, think about the security implications. Once connected, mount as a local drive in Windows (Tools --> Map Network Drive). Then use whatever synchronization tool is handy to back up the drive (Robocopy, rsync, etc.).

    This will save documents but not any previous versions, or any metadata associated with the documents. You will be able to get to documents but migrating to another provider will be a pain as you will have to recreate the document libraries individually and then re-migrate docs back into them.
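    Assuming the WebDAV share ends up mapped as Z: and the backup target is D:\sp-backup (both names illustrative), a Robocopy pass might look like:

```
rem /E = include subfolders, /Z = restartable mode, /R and /W = don't retry locked files forever
robocopy Z:\ D:\sp-backup /E /Z /R:2 /W:5 /LOG:C:\logs\sp-backup.log
```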

    hurfdurf : Upon reading your question again, WebDAV probably is what the vendor meant by "explorer view".
    David Mackintosh : Semi-hacky is better than totally nonexistent. This got enough of the job done.
    From hurfdurf
  • The only method I know of is stsadm.exe. This creates a .bak file which can be loaded on your new host. I do think it can only be used on the hosting provider's side.

    Example command:
    stsadm.exe -o backup -url http://sharepointurl.domain.com -filename c:\path_to_file\filename.bak

    I work for a SharePoint provider; I will ask around and update my answer if there is another method that can be used.

    From Embreau
  • If the current provider won't help you back the site up properly, cut your losses before the site grows bigger and take your business some place that takes the job seriously.

Create Batch file for laptops to copy templates

Hi Everyone,

I am new to batch file programming and need some expert help. I want to create a batch file that runs at login and compares a local copy of the templates to what we have on the network. We have templates on a mapped network drive in h:\clients\templates, and on the laptop in c:\apps\data\clients\templates. When the laptop is offline, it substitutes c:\apps\data to appear as the h: drive; that way the files still appear to be in h:\clients\templates. Here are the basic steps I need the batch file for:

  1. first it checks to see if the computer is on the network
  2. if it’s online, it copies any newer files from the server onto the local drive
  3. if it’s not online, it uses the local files and has them show up on the laptop at the same location as on the network

I really appreciate any help/suggestions..

Thank you in advance, Hemal
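  • Not a complete answer, but the three steps sketch out roughly like this as a batch file (the directory-existence test for "on the network" is an assumption; adjust the paths and the check to your environment):

```batch
@echo off
rem 1. check whether the network copy of the templates is reachable
if exist h:\clients\templates\nul goto online

rem 3. offline: make c:\apps\data appear as the h: drive
subst h: c:\apps\data
goto end

:online
rem 2. online: copy any files that are newer on the server (/d = newer only)
xcopy h:\clients\templates c:\apps\data\clients\templates /d /e /c /i /y

:end
```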

  • Wouldn't it be much easier to use Windows Offline Folders to give you this behaviour automatically?

    Hemal : It should also point the templates to the same location as my network drive as our application looks for the network path for the templates....
    Zoredache : @Hemal, a properly configured Offline Folders setup should do that.
    From MikeyB

What SATA drive should I install FreeBSD onto? i.e. ar0 vs da0 vs ad4/5

I'm installing FreeBSD 8.0 on a server that has hardware SATA Raid.

I'm just wondering. What is the difference between these devices.

i.e. ar0, da0, ad4, ad5 I take it that ad4 & 5 are my two disks. Somehow the OS can see them individually even though it's one logical mirrored drive.

Should I be installing it onto ar0 or one of the adX disks? What is da0? It's smaller than the others. ar0 is not some kind of software RAID device, is it?

Just want to make sure I don't mess this up right from the get go.

  • It's hard to say without knowing the system configuration (controller type, etc.) but you probably want to install onto the arN device -- see http://www.freebsd.org/doc/en/books/handbook/raid.html (Hardware RAID - about 3/4 of the way down) & the ataraid(4) manpage.

    See also the gmirror(8) manpage & various bits of documentation on using that -- it's a bit more initial work, but I find it better documented than the ataraid stuff...
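    For reference, a gmirror setup is only a few commands; a sketch assuming the two disks are ad4 and ad5 and that the BIOS RAID has been switched off first:

```shell
# label the two disks as mirror gm0; add geom_mirror_load="YES" to /boot/loader.conf
gmirror load
gmirror label -v -b round-robin gm0 ad4 ad5
# the array then appears as /dev/mirror/gm0, which is what you'd install onto
```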

    Matt : ar0. da0 - the flash disk plugged into the server. ad4/5 the disks. Not sure what would happen if I install on either of those. After all it's hardware raided. Will it still mirror?
    voretaq7 : Most of the SATA RAID stuff is actually software RAID (which is why you're getting an `ar0` device) -- If you have what FreeBSD considers a "real RAID controller" you'd have a different device name. I'm honestly not sure what would happen if you have the SATA RAID option enabled in BIOS and you write directly to one of the underlying `adN` disks but my gut says "bad things", which is why you'd want to write to the `ar0` device (or switch off the BIOS RAID and do it with gmirror or similar :-)
    Andrew : Commonly known as fakeraid - http://en.wikipedia.org/wiki/RAID#Firmware.2Fdriver-based_RAID_.28.22FakeRAID.22.29
    From voretaq7

User does not exist when configuring Sharepoint Report Services

  1. Installing in a vm which is running Windows Server 2003
  2. Installed SQLServer 2005 with reporting services
  3. Installed ServicePack 2
  4. Installed SharePoint 2003
  5. Configured SSRS per (Installation/Configuration Guide for SharePoint Integration Mode) by Raju Sakthivel Architect
  6. Installed SharePoint Report Services Add-in
  7. Went to SharePoint 2003 Admin Central
    a. Reporting Administration
    i. Grant Database Access
    1. Username, password: used a login that is part of the Local Admin Group; got "username doesn't exist".

Need help figuring out this issue!

  • (We are actually using MOSS 2007; sorry, I was just confused when I said we were running SharePoint 2003.) But I am using Windows Server 2003, and I got it all working! Now I am ready to deploy it out to our clients, but my boss has a good question which I can't answer: does Windows Server 2003 need any Service Packs, or can my setup work without them? I know you have to have SQL Server 2005 Service Pack 2, but can you also use Service Pack 3?

    Thanks for your help!

Getting something out of an old Digi ST-1032

I'm trying to update the setup of a shipping and packaging unit that makes use of two Digi ST-1032 'Terminal Server' units. I find it a strange name for the devices, but pre-nineties it apparently was the name for a device that offers a number of serial ports over a suitably bundled back-end, in this case SCSI.

The friendly people over at http://digi.com informed me they stopped supporting the device about a decade ago, and no Windows XP drivers were ever written. So for now it looks like I'm stuck with the two (aging) NT4 servers that run the software controlling all the serial barcode scanners and thermal printers that are connected.

What are my options, and what would you do? This is what I can come up with so far:

  • Keep the NT4 servers, just keep developing the software using the same Delphi 6 in use since the start.
  • Try to find out how to connect to the device directly and talk its speak. (I've been peeping around http://ftp1.digi.com but haven't found anything; I did see some Linux support when googling around, though.)
  • Upgrade the server hardware, but install Windows 2000 Server, which should be able to run the NT-drivers.
  • Install a virtual platform (e.g. VMWare) that is capable of patching through the SCSI device to a virtual image running NT4 or AIX or anything that can run the drivers, and use a homebrew client-server or something like http://com0com.sf.net to patch the serial ports through to a decent server running the software.
  • Demand the budget get expanded to include new port-switches and retire the old SCSI ones (together with the NT4 servers)
  • Try to fit into current budget about 60 single USB-to-serial or TCP/IP-to-serial adapters (and learn to pray it works in seven languages)
  • What does this thing and the card it's plugged into look like?

    PCI? ISA?

    I've used similar "giant pile of serial port" devices in the past and basically you've got a bus extender to a box that has a stack of 16550 serial ports at exotic addresses. On the ones I've used, the thing that looks like "SCSI" is actually just a connector that connects the bus from the card to the box that has the electronics -- it isn't scsi or anything weird, just ISA with buffers to deal with the timing issues.

    If it is just a box of 16550s, drivers in this case aren't really an issue.

    Try booting the box off of linux and see if it finds the serial ports. Try drivers from companies that make similar devices, such as this place and see if they work.

    Stijn Sanders : It looks much like a network switch/hub, with RJ45's at the front and two SCSI2 at the back, with dip-switches to set the SCSI ID and termination. 'scsiscan' actually lists the device, which inspired the second option I listed.
    chris : Yeah, sorry for the misdirection -- in the early 90's they did make scsi terminal servers for the unix workstation market but they're rare as hen's teeth. What you have is officially impossible to support.
    From chris
  • Option 5 is my bet.

    (Aside - you won't be running an AIX guest under VMware. VMware is x86 virtualization, AIX runs on RS6000 or Power chips - totally different architectures.)

    I would recommend that the software you're writing should depend on currently-supported hardware, so I'd look into the devices that come from the link @chris posted, or that EtherLite that you linked to, or anything else currently sold and supported by reputable manufactures. Since you're already familiar with Digi, maybe you should stick with them and ask them what the best thing to migrate to would be. Maybe they have something new that speaks a similar language to what you're used to with the old 1032 units.

    Craig : It's unlikely that VMware (or any other virtualization) will be able to expose an odd piece of hardware to a VM.
    Stijn Sanders : I mentioned AIX because Digi still provides legacy drivers for that system. If I have a quick peek at this VMware 1.0.8 I have here, it allows me to add a 'Generic SCSI device' and connect it to a physical device.
    mfinni : @Stijn - but you will never be able to run AIX under VMware. Period. VMware is only for x86 virtualization, and they don't have AIX running on x86.
    From mfinni
  • Like Chris, I've never heard of a Digi board that plugged into a SCSI port. We had a smallish Digi board many years ago to get about 8 serial ports on one PC for a guy who had to log the readings from a bunch of electrical meters, but that just plugged into an ISA slot. When the time came to replace that, we were lucky that he didn't need quite as many serial ports and we got a PC with 4 onboard and then got a couple serial-USB converters.

    If you don't need too many ports, you could go with serial-USB; otherwise the Etherlite products don't seem too expensive.

    From Ward

Post too large exception in Sun ONE Application Server

Hi, I am getting the following exception after posting a large amount of data.

[#|2010-03-01T23:36:49.764-0600|WARNING|sun-appserver-ee8.1_02|javax.enterprise.system.stream.err|_ThreadID=31;|java.lang.IllegalStateException: Post too large
        at org.apache.coyote.tomcat5.CoyoteRequest.parseRequestParameters(CoyoteRequest.java:2607)
        at org.apache.coyote.tomcat5.CoyoteRequest.getParameter(CoyoteRequest.java:1139)
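
  • The Coyote classes in that stack trace are Tomcat-derived, and in Tomcat-style configuration the POST limit is the connector's maxPostSize attribute (in bytes). The exact file and element name for Sun ONE 8.1 may differ, so treat this as a sketch:

```xml
<!-- raise the POST size limit to 10 MB on the HTTP connector -->
<Connector port="8080" maxPostSize="10485760" />
```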

How to limit web access by password and IP address using .htaccess?

We're trying to lock down our administrative site by requiring a password AND requiring that the request is coming from an authorized IP address. We've figured out how to do both separately, but can't figure out how to combine the two.

AuthName     "Restricted Access"
AuthUserFile /usr/www/users/directory/.passwd
AuthType     Basic
Require valid-user
Order deny,allow
Deny from all
Allow from 79\.1\.129\.85
Satisfy Any

This is the closest we've come. BTW, we also want to be able to enter multiple IP addresses on the whitelist.
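
  • For the AND behaviour, the usual ingredients are Satisfy All (Any means password OR address) and plain addresses after Allow from, which accepts IPs, partial IPs, or network/mask forms rather than regular expressions. Multiple whitelisted addresses are simply listed; a sketch (the second address is a placeholder):

```apache
AuthName     "Restricted Access"
AuthUserFile /usr/www/users/directory/.passwd
AuthType     Basic
Require      valid-user
Order        deny,allow
Deny from all
Allow from 79.1.129.85 203.0.113.7
Satisfy All
```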

Windows Server 2003 -- Firefox 3.6 deleting all downloads after they're done

No idea what's going on. New to Windows Server 2003. I can see the file in progress in Windows Explorer but then it disappears as soon as the file is finished downloading.

What is going on? 0.o

  • Have you got an overly-protective anti-virus quarantining your files?

    Other than that I have no idea!

Testing a UPS by unplugging it from the wall

At a not-for-profit/charity I volunteer for occasionally, we were very fortunate to be provided (donated) with a brand new 3kVA APC UPS. This is total overkill for our modest server rack (four mid-range servers and a switch!) but hey, I'll take what I can get!

Pressing the TEST button on the front indicates that yes, the UPS does work. Brilliant. But it only tests it for about 15 seconds.

My question is - will I degrade the UPS by unplugging it from the wall to see how long it will last? My plan is to unplug it, and wait until the battery meter reaches its last LED before plugging it back in, so that I know about how long I will have in the event of a power outage.

Do people do this on a regular basis? I'm guessing no (lead-acid is very different from Li-ion batteries)... but what kind of harm would it do if this were to happen (on purpose) every 6 months?

  • No, it will not degrade it. Feel free to test - just be careful, since you may not have enough time to do a clean shutdown. Computers/servers use more power when booting up and shutting down compared to just sitting idle.

    Another way, if you can determine what power your server rack draws, is to create a similar load and see how long it stays up. But it's never a good idea to use 100% of the juice from batteries, since you can never tell how much juice they actually have. I'd be careful going over 75% of the total time.

    If you need the maximum uptime, I'd look into powering down unwanted servers and letting just the critical servers run.

    chris : For a first test I'd just plug in a couple 100 watt light bulbs.
    Farseeker : Hehe, incandescent bulbs are illegal in Australia now. You can still use them but you can't buy them. You can only get the fluoro energy-efficient ones, and I think you'd need a *lot* of them to get any kind of load!
    J Sidhu : Haha, come on guys, there's always other options: workstations, heaters (you can control the temp and thus sort of control the flow of current), hair dryers?, etc. I'm sure you can come up with many ways to burn electricity =)
    From J Sidhu
  • Testing with an unplugged UPS is very similar to testing a real power failure.
    This would add a nearly-full recharge-cycle to your batteries.
    You are correct about the difference with Li-ion batteries.

    If you have a software link with the UPS, it will trigger alarms and eventually a shutdown.
    You could re-plug on the first alarm.

    Caveat: Remember that after such a test you are left with no charge on the batteries;
    a real power failure at this time would leave you with no backup power.
    And, Lead-Acid batteries take longer to recharge -- extending your critical no-backup window.

    You can read up more on Lead-Acid batteries at the Battery University page.

    It takes about 5 times as long to recharge a lead-acid battery to the same level as it does to discharge.
    On nickel-based batteries, this ratio is 1:1, and roughly 1:2 on lithium-ion.

    Farseeker : Good point - don't want a real power failure after the test! I notice that it's almost always compensating for over-voltage as well, which kind of indicates the quality of the power in the building...
    From nik
  • First off, that most certainly is not overkill; it's a reasonable size for what you have. As for the testing, if you have Windows, install the accompanying software (or download it from the APC site) and use it to perform the testing and calibration of your UPS.

    It's worth mentioning that, depending on model, an APC UPS will normally self test every week or month, which includes running down the batteries to determine health and run time.

    You might also consider reducing the sensitivity setting of the UPS if it is frequently compensating.

    Farseeker : I thought it might be overkill because only one of the little LEDs that indicate load lights up, and only after I power on the last server (before that, it's got 0 lit up). I haven't done any configuration of it yet, so I'll take a look at the sensitivity settings on it.
    John Gardeniers : One light is nice. That means you have a good run time and the UPS should never get stressed. :)
  • If it's a 3kVA APC UPS, chances are real good it has either a serial or USB port on the back (hope for the USB). If that's the case, you can connect a Windows or Linux box to it and run PowerChute (available for download at apcc.com). It should tell you the expected run-time of the UPS itself. Since it sounds like you have a light load, it may be pretty long ;).

    However, upthread you indicate you're getting a lot of overvolts. This will unfortunately reduce the lifetime of the UPS itself as it'll be dealing with all that dirty power. Perhaps once or twice a year run an unplug-from-wall test, and watch the run-time level in powerchute to see if it is still accurate. If it starts decrementing a minute every 30 seconds, which I've seen happen, you know your runtime estimates are buggered and it's time to retune your shutdown procedures. And it's time to get new batteries.

  • Once you have the PowerChute software installed and running, you can also install additional clients thereof on the other servers and configure the whole lot for automatic shutdown once the battery level drops below a certain percentage, triggered by the one server that has the USB/serial connection. The newer versions of UPS will also support this via Ethernet, but this will require an additional plug-in card. This can be very helpful, especially with database servers (which don't like hard power failures at all).

    From wolfgangsz
  • You should consider turning off the circuit breaker to the outlet running the rack in lieu of unplugging the cord from the wall. The UPS is losing its electrical ground when you unplug it from the wall. While it's unlikely that anything would go wrong, the UPS designers "expect" that path to ground to remain available at all times, and if something did short during your test you might see sparks (smoke, flame, etc) when the electricity takes another path to ground. I've unplugged UPSs from the wall for testing before, but seeing a flash of "lightning" and hearing a loud "bang" coming out of a UPS during one such test gave me "religion" about not doing that again. After talking to an electrician friend I decided that, from then on, I'd do UPS tests that didn't interrupt the ground to the UPS.

    BTW: The PowerChute Network Shutdown software from APC is garbage. You might have a look at apcupsd. It runs under a variety of operating systems (Windows included) and is much easier to configure (and to replicate the configuration on multiple servers via copying files) than the APC alternative.
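    To give a flavour of how simple apcupsd is to configure: here is a minimal sketch of /etc/apcupsd/apcupsd.conf for a USB-connected APC unit. The directive names are real apcupsd options, but the thresholds are made-up examples; check your version's defaults before copying anything.

    ```
    ## Minimal apcupsd.conf sketch for a USB-attached APC UPS (example values)
    UPSCABLE usb
    UPSTYPE usb
    DEVICE                  # left blank: apcupsd autodetects USB units
    BATTERYLEVEL 15         # start shutdown when charge drops below 15%
    MINUTES 5               # ...or when estimated runtime falls below 5 minutes
    TIMEOUT 0               # 0 = use BATTERYLEVEL/MINUTES, not a fixed timer
    NETSERVER on            # let other machines poll this one over the network
    NISPORT 3551
    ```

    Other machines can then run apcupsd pointed at port 3551 on this host, which is what makes the replicate-the-config-by-copying-files approach work.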

    Farseeker : It's installed in a church building, so religion is not hard to come by there :) That said, I don't particularly like the sound of anything in your story so I'll keep that in mind. It's on a dedicated 15 amp circuit so killing the power shouldn't affect anything else.
    kmarsh : +1 that PCNS is garbage. Take a look at the image size when run on Sun's Linux 64bit Java. Pay attention to the defaults when using the automated install. After much effort I finally got APC to open bug reports on these and other issues, a year later gave up as they would do nothing about them.
    kmarsh : DANGER DANGER DANGER A 3Kva UPS should NOT be on a 15A circuit!!!!
    Farseeker : Uuh, why not? It specifically advised in the manual to connect to 15A (We're in Australia, 240 volts)...
    SpaceManSpiff : Higher voltage means fewer amps are required for the same amount of "power". So 15 amps in NA cannot supply as much power as 15 amps in Australia. If the manual says you're fine then you are.
    Cristian Ciupitu : What if the socket doesn't have ground?
    Evan Anderson : @Cristian: If your equipment has a grounding prong on the plug then it needs to be plugged into a grounded outlet. If something shorts and there isn't a ground in the outlet then the electricity will find some non-preferred route to ground-- possibly through *you* when you touch the equipment (potentially making you dead).
    Farseeker : **Every** socket in Australia has a ground, thankfully.
  • I'm going to make this point loud and clear.

    DO NOT TEST THE UPS BY UNPLUGGING IT

    You break the ground, which means that if any of the hardware has a short and there is no other ground, it will go through you to get to ground if you end up being the shortest path. If everything is working fine this will never happen, but hey, if this never happened there'd be no need for a ground.

    The best way is to have the outlet that the UPS is plugged into able to be switched on and off, so that the ground and neutral remain intact through the test. The breaker can do this, or you can use a local high-quality switch. If you can't do that, then put a VERY GOOD (i.e. $30 to $60 range) power bar with an off switch between the wall and the UPS, and make sure you label the switch for what it is for. The point mentioned in the comments on the other post is to NOT overload the power bar; doing it this way is better than unplugging. You can now switch off the line in and simulate a power failure, and this will leave the ground and neutral intact.

    You can test by letting it run down; although crude, it will work. Also, if the software has a calibration option it will do that for you and run it every 6 months or so. The run time will degrade over time, so if you are using monitoring software to shut down the servers at, say, 15%, that 15% will change over time and it can correct for that.

    For your voltage issue, if you can, run a dedicated line from the power box to the servers and use it only for the UPS. Things like tube lights, fans, motors, etc. will create dirty power. Having the server on its own circuit will help, since it moves it further away from those items. If it's still happening, it might be worth getting an isolator put in, or it could be that your utility power is just really bad. Put a good meter on the line and see what it's really reading. It needs to be a good meter because I've seen cheap ones be off by 5 volts, and that's enough to cause a UPS to go into over-voltage, so you need an accurate number. If this is a church, there is a chance you have an electrician as a member who could help out.

    Here are reference links to grounding and daisy chaining UPS

    Grounding

    Daisy Chaining

    Kara Marfia : Plugging a UPS into a power strip is apparently a Very Bad Idea. http://serverfault.com/questions/29288/ups-and-power-strip-interactions
    SpaceManSpiff : It's good to understand why it's a bad idea. The reason it's a bad idea is that people overload the plugs, thinking that adding 20 plugs allows them to plug more stuff in when the line has a finite limit. In this case it's better to use a power bar with a switch than to unplug the UPS, and the plug is also on a dedicated line. The best way is to make it a switched outlet that the UPS is plugged into, though. I updated my answer to reflect that and added some links.
  • Testing the batteries down to the last LED equates to roughly 80% Depth of Discharge. Doing this test on a regular basis will reduce the life of the batteries much more than a shallower depth of discharge would. I would recommend only testing to 50% Depth of Discharge.

    Battery life is directly related to how deep the battery is cycled each time. If a battery is discharged to 50% every day, it will last about twice as long as if it is cycled to 80% DOD. If cycled only 10% DOD, it will last about 5 times as long as one cycled to 50%. Obviously, there are some practical limitations on this - you don't usually want to have a 5 ton pile of batteries sitting there just to reduce the DOD. The most practical number to use is 50% DOD on a regular basis. This does NOT mean you cannot go to 80% once in a while. It's just that when designing a system, when you have some idea of the loads, you should figure on an average DOD of around 50% for the best storage vs. cost factor. Also, there is an upper limit - a battery that is continually cycled 5% or less will usually not last as long as one cycled down 10%. This happens because at very shallow cycles, the lead dioxide tends to build up in clumps on the positive plates rather than in an even film. The graph in the FAQ linked below shows how lifespan is affected by depth of discharge. The chart is for a Concorde Lifeline battery, but all lead-acid batteries will be similar in the shape of the curve, although the number of cycles will vary.

    http://www.windsun.com/Batteries/Battery%5FFAQ.htm#Lifespan%20of%20Batteries
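    As a back-of-the-envelope illustration of those ratios (the multipliers below are taken straight from the paragraph above; real figures depend entirely on the battery model, per the linked FAQ):

    ```python
    # Rough cycle-life multipliers relative to 80% DoD, per the answer above:
    # 50% DoD lasts ~2x as long as 80%; 10% DoD ~5x as long as 50% (so ~10x).
    RELATIVE_LIFE = {0.80: 1.0, 0.50: 2.0, 0.10: 10.0}

    def estimated_cycles(cycles_at_80pct_dod, dod):
        """Estimate cycle life at a given depth of discharge (DoD fraction)."""
        return int(cycles_at_80pct_dod * RELATIVE_LIFE[dod])

    # e.g. a battery good for ~500 cycles at 80% DoD:
    print(estimated_cycles(500, 0.50))  # -> 1000
    print(estimated_cycles(500, 0.10))  # -> 5000
    ```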

    From Tony
  • "indicate you're getting a lot of over volts" - APC UPSes sometimes come with low max-voltage settings, which will cause lots of overvoltage events. Raise the limits so they are in line with the Australian spec. Many of the units I look after in the country frequently see variations in voltage, and many are on the high end, near 250 volts.

    From DR
  • Read the manual. :) (Though that can be a bit difficult to find on APC's page...) One thing I've not seen mentioned anywhere is that there is a user-initiated "Runtime Calibration" option on APC's stuff. You can do it either via the PowerChute software (I recommend the "Business" edition) or via some combination of the test/power buttons on the front. As the documentation states, you want to do it with a similar but non-critical load.

    From Scandalon

Network sharing on Server 2008 asking for login

I sometimes struggle with Windows' weird file sharing. On the face of it, it seems simple, but there are these gotchas that catch me out.

I've shared a folder on the root of the C drive and given Everyone read access, just as a test. In the Network and Sharing Center I've disabled password protected sharing. Whenever I browse to this server from a different machine (\\machine) I get prompted for a login. When I try to browse directly to the folder (\\machine\public) I also get prompted for a login. So despite the fact that password protected sharing is disabled, it still isn't allowing anonymous access. Suggestions would be very much appreciated. Thanks

  • If you want to access shares on a Windows Server 2008 computer from Windows XP or Windows 2000, you have to perform the following steps:

    1. Change the settings in the Network and Sharing Center

    Navigate to Control Panel >>> Network and Sharing Center, and ensure:

    1. File Sharing: On;

    2. Public folder sharing: Turn on sharing so anyone with network access can open files (or Turn on sharing so anyone with network access can open, change, and create files);

    3. Password protected sharing: Turn off password protected sharing

    2. Change the security settings in gpedit.msc

    Navigate to gpedit.msc >>> Computer Configuration >>> Security Settings >>> Security Options

    Change "Network access: Sharing and security model for local accounts" from "Classic - local users authenticate as themselves" to "Guest only - local users authenticate as Guest"
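    For what it's worth, on editions where gpedit.msc isn't available, that last policy maps to the ForceGuest value under the Lsa registry key, so the same change can be made from an elevated command prompt. This is a sketch (the mapping is well documented for XP/2003-era Windows; verify on a non-production box, and note that the gpedit UI remains the authoritative way to set it):

    ```
    REM Set "Sharing and security model" to "Guest only" (ForceGuest = 1)
    reg add "HKLM\SYSTEM\CurrentControlSet\Control\Lsa" /v ForceGuest /t REG_DWORD /d 1 /f
    ```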

Ticket system with multiple users in charge

I'm working in the administrative department of a large university, and I'm trying to find the right trouble ticket system. While I have tried Trac and OTRS already, there is a missing feature I need, and maybe someone can tell me if such a thing exists.

Imagine for a second you are working in a place where your users won't share information with each other unless they absolutely, positively have to (and sometimes, they won't share it unless you take it from their cold, dead binders). From here, feature number one would be "if I create a ticket, no other user should be able to see it" (this is why I had to ditch Trac; changing the ticket URL makes it really easy for users to spy on each other). But it gets more complicated. Let's assume that user A really messes up his reports (as in "I lost $200.000, which is, coincidentally, the cost of my new car"), so a new ticket is created. Now, users B and C (say, the legal department and accounting) must also be kept in the loop, and maybe someone else in the future, so each change in the ticket must be communicated to all of them, and they also have to be able to add details to the ticket, which leads me to feature number two: it should be possible to add an arbitrary number of users to a ticket.

Could anyone tell me if such a marvelous system exists? Or, even better, tell me what such a marvelous system is called? (Call me a geek, but if someone just answered "Yes" it would be my fault for not asking correctly.)

Bonus question: it would also be a nice thing if it could be translated into Spanish and had support for LDAP. Oh, oh, and it should make coffee too!

  • Try RT (http://bestpractical.com/rt) - sounds like it can do everything you need.

    Bonus Answer: RT can authenticate against a bunch of stuff (LDAP among them). Not sure about translation though, and the coffee it makes is atrocious. :)
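    If you end up scripting against RT, its REST 1.0 interface makes the "arbitrary number of users on a ticket" part easy to automate. A sketch in Python: the `/REST/1.0/ticket/<id>/edit` URL and the `user`/`pass`/`content` form fields follow RT's REST 1.0 conventions, but the host name and credentials here are invented.

    ```python
    # Sketch: build the POST request that adds Cc watchers to an RT ticket
    # via RT's REST 1.0 interface. Host and credentials below are made up.
    from urllib.parse import urlencode

    def build_rt_edit_request(base_url, ticket_id, user, password, cc_addresses):
        """Return (url, form_body) for a POST that adds Cc watchers to a ticket."""
        url = "%s/REST/1.0/ticket/%d/edit" % (base_url.rstrip("/"), ticket_id)
        content = "Cc: " + ", ".join(cc_addresses)
        body = urlencode({"user": user, "pass": password, "content": content})
        return url, body

    url, body = build_rt_edit_request(
        "https://rt.example.edu", 1234, "apiuser", "secret",
        ["legal@example.edu", "accounting@example.edu"])
    # POSTing `body` to `url` (e.g. with urllib.request) would update the ticket.
    ```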

    Warner : Request Tracker is by far my favorite ticket system to date.
    voretaq7 : Remedy is a nicer system (especially for the end-user/support folks or if you're doing helpdesk-type stuff), but it costs more than the GDP of some small nations, and RT can do everything it does with a little elbow grease
    ErikA : +1 for RT. With a "grass is always greener" mentality, I go try other ticketing systems from time to time, but always come back to RT.
    From voretaq7
  • Jitbit Help Desk

    http://www.jitbit.com/helpdesk.aspx

    $600 one-time fee (includes source code); ASPX; SQL-based; Active Directory integrated.

    There are some bugs based on our usage, but the software is light and simple to deploy, and because it's SQL-based you can run your own custom queries on it (we use it for department time allocation when we bill out for work).

    You open tickets on behalf of users, but when making comments on tickets you can add other users as recipients, make tech only comments, publish tickets to Knowledge Base, etc.

  • We use Spiceworks which is free. (Ad supported.) It authenticates to AD, which is a bastardized LDAP, so if it's not supported out of the box, there should be a plugin, or even just instructions, for it. Users can only get access to their tickets, but you can "CC" others as needed so they can track progress. (You can force HTTPS as well.)

    English is the only out-of-the-box language (besides pig latin), but checking the language pack page, I see 19 results for "Spanish", so whether it's the Mexican or Colombian variant, I think you're covered there...

    Spiceworks does lots more, as well. The network mapping tool is okay; it's still in beta and gets confused by some complicated setups, but the reporting and alerts can make life easier once they are set up. It won't make coffee for you, but if your coffee maker is networked and supports SNMP, it can tell you when it's ready! :) I don't think it can help you find a better work environment, though.

    Hondalex : +1 on Spiceworks, we use it here in the office and I really like it and it's free.
    From Scandalon
  • I really like osTicket:

    http://osticket.com/

    It functions very well, has a knowledgebase, email notifications, multi-user, and it is very easy to install.

    From Physikal
  • Try redmine, it's gpl and you can add "watchers" to the tickets.

    From sntg

Impossible to connect to VSFTPD from a remote server

My FTP server is a CentOS 5.4 with VSFTPD.

When I try to ls after connecting to my server using FTP, I get this:

ftp> ls
229 Entering Extended Passive Mode (|||12206|)
ftp: Can't connect to `000.000.000.000': Connection refused
500 Illegal PORT command.
425 Use PORT or PASV first.

I can do mkdir without any problem.

When I connect from the same server to my ftp server I have no problem.

Ports 20 and 21 are open in my iptables. How can I fix that?

Thanks!!

UPDATE :

telnet myftpserver.com 20
Trying 000.000.000.000...
telnet: connect to address 000.000.000.000: Connection refused
telnet: Unable to connect to remote host

and

[root@internal vsftpd]# /sbin/iptables -L -n
Chain INPUT (policy ACCEPT)
target     prot opt source               destination         
RH-Firewall-1-INPUT  all  --  0.0.0.0/0            0.0.0.0/0           
SSH_CHECK  tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:22 state NEW 

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination         
RH-Firewall-1-INPUT  all  --  0.0.0.0/0            0.0.0.0/0           

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination         

Chain RH-Firewall-1-INPUT (2 references)
target     prot opt source               destination         
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:33988 
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:3306 
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:20 
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:21 
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:80 
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0           
ACCEPT     icmp --  0.0.0.0/0            0.0.0.0/0           icmp type 255 
ACCEPT     esp  --  0.0.0.0/0            0.0.0.0/0           
ACCEPT     ah   --  0.0.0.0/0            0.0.0.0/0           
ACCEPT     udp  --  0.0.0.0/0            224.0.0.251         udp dpt:5353 
ACCEPT     udp  --  0.0.0.0/0            0.0.0.0/0           udp dpt:631 
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:631 
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0           state RELATED,ESTABLISHED 
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           state NEW tcp dpt:22 
REJECT     all  --  0.0.0.0/0            0.0.0.0/0           reject-with icmp-host-prohibited 

Chain SSH_CHECK (1 references)
target     prot opt source               destination         
           all  --  0.0.0.0/0            0.0.0.0/0           recent: SET name: SSH side: source 
DROP       all  --  0.0.0.0/0            0.0.0.0/0           recent: UPDATE seconds: 60 hit_count: 4 name: SSH side: source 
  • The problem is related to the fact that passive FTP uses ports other than 20 and 21. Read about it here: http://slacksite.com/other/ftp.html

    Usually I will set up a port range in the vsftpd.conf file for passive FTP and then open those ports on the firewall.

    Also, I'm mostly a FreeBSD guy, but I'm pretty sure there is a way on Linux to dynamically open the FTP PASV ports; someone else will have to chime in on that one. I'll look and see what I can find.

    EDIT:

    First hit on google: http://www.cyberciti.biz/faq/iptables-passive-ftp-is-not-working/

    Better explanation: http://www.sns.ias.edu/~jns/wp/2006/01/12/iptables-connection-tracking-ftp/
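    For the port-range approach, here is a sketch. The `pasv_min_port`/`pasv_max_port` directives are real vsftpd options, but the 10090-10100 range is just an example, and the chain name is taken from the iptables listing in the question:

    ```
    # /etc/vsftpd/vsftpd.conf -- pin passive-mode data connections to a range
    pasv_enable=YES
    pasv_min_port=10090
    pasv_max_port=10100

    # then open that range in the firewall and save the rules (CentOS style)
    /sbin/iptables -I RH-Firewall-1-INPUT -p tcp --dport 10090:10100 -j ACCEPT
    /sbin/service iptables save
    ```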

    Warner : For netfilter and passive FTP in Linux, make sure the `ip_conntrack_ftp` module is loaded or compiled in if monolith. `lsmod` to list and `modprobe` to load. Ha, TFA says that too.
    einstiien : @warner, Thank you, yeah that's what I found in that second link I posted.
    benjisail : What does `ip_conntrack_ftp` do?
    einstiien : What it does is monitor the FTP traffic for the PORT command being sent out to the client. When it sees this command it looks at the port that vsftpd is opening (a random port > 1024) and then dynamically opens this port in the firewall.
    benjisail : So I fixed my problem by doing this: `# vi /etc/sysconfig/iptables-config` `IPTABLES_MODULES="ip_conntrack_netbios_ns ip_conntrack_ftp"` `# /sbin/service iptables restart`
    einstiien : You also need to add the rules mentioned in the link above, a better explanation is here: http://www.sns.ias.edu/~jns/wp/2006/01/12/iptables-connection-tracking-ftp/
    From einstiien