Wednesday, January 12, 2011

Cisco PIX - What does this line do?

I found this line among many other ACL lines in my PIX. It looks different from the rest of them, and it sits at the end of all the other ACL lines, even after access-list acl-out deny ip any any.

access-list 110 permit ip 165.138.236.0 255.255.255.0 165.139.2.0 255.255.255.0

What does it do, and what are each of the parts? The rest of my ACL lines end with something like any eq 1234.

Thanks in advance!

  • It's allowing all IP traffic from the 165.138.236.0/24 subnet to the 165.139.2.0/24 subnet. It's probably being used as a match list on a VPN tunnel or to prevent NAT on tunneled traffic.

    The number, 110, is just an arbitrary number to identify the access list. "Permit" indicates that it will permit the traffic (as opposed to denying it). "IP" indicates that it matches the IP protocol as a whole (as opposed to a specific protocol number, or TCP, UDP, ICMP, etc.). The 165.138.236.0 and 255.255.255.0 identify the source network. The 165.139.2.0 and 255.255.255.0 identify the destination network.

    For more in-depth info, have a look at: http://www.networkclue.com/routing/Cisco/access-lists/index.aspx

    eleven81 : A stellar answer. Thank you!
  • It allows IP traffic in general from 165.138.236.0/24 to 165.139.2.0/24.

    The eq 1234 in the other rules specifies ports, but there are no ports in IP. To match a port, you have to specify TCP or UDP in the rule.
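
    For example, a hypothetical TCP rule restricted to port 1234 might look like this (the list number and subnet here are made up for illustration):

    access-list 120 permit tcp 165.138.236.0 255.255.255.0 any eq 1234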

    From MikeyB
  • That ACL allows all traffic from one subnet to the other in a single line.

  • You should look at the rest of the config to see if there is a rule, such as a VPN or NAT statement, referencing ACL 110.

    Doing a:

    sh run | i 110
    

    will give you all the lines that have 110 in them. I realize there is the possibility of fluff from IP addresses, etc., but it shouldn't be too much.

    From Zypher
  • http://www.cisco.com/en/US/docs/security/pix/pix63/command/reference/about.html

    From XTZ

List of GPOs

How do I get a list of the GPOs applied to a machine in a Win2k8 environment, remotely?

I would prefer a PowerShell solution, but anything will help.

  • GPRESULT

    http://technet.microsoft.com/en-us/library/cc733160(WS.10).aspx

    Works against remote computers.

    In Windows 2003 we only had to type GPRESULT on the command line to view the applied Group Policies; in Windows Server 2008, however, we need to add the /R switch after gpresult.
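
    For example, a rough sketch of running it against a remote machine (the computer and user names are placeholders):

    gpresult /S SERVER01 /USER mydomain\jdoe /R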

    From Rob Bergin
  • GPResult, as Rob mentioned, is an excellent way of doing this. However, you can also get at this information via the WMI RSoP classes. Check out http://msdn.microsoft.com/en-us/library/aa375082(VS.85).aspx for further information, as this may allow you to do this in PowerShell against all of the machines in your environment.
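
    As an untested sketch, querying those classes from PowerShell might look something like this (the namespace and class names are taken from the RSoP documentation linked above; SERVER01 is a placeholder):

    Get-WmiObject -Namespace root\RSOP\Computer -Class RSOP_GPO -ComputerName SERVER01 |
        Select-Object Name, GuidName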

  • There are 3rd-party tools that supposedly do it in PowerShell here: http://www.sdmsoftware.com/freeware

    Particularly: Out-SDMRSOPLoggingReport: Creates an XML or HTML Group Policy Results report

    From TheCleaner
  • The Group Policy Management Console can also do this in a GUI manner, and has nice features such as letting you see exactly which policy each setting is coming from.

    From mh

How to retrieve the IP address assigned to the machine by the ISP

Hello Friends

My question is regarding the IP address assigned to a machine. Whenever we go to a site that reveals our IP address, it displays the number assigned to the machine through which we are accessing the Internet. I want to know which commands, if any, can retrieve that ISP-assigned IP address from the local machine, rather than having to visit such sites. The local machine has an IP address of 192.168.1.2, runs Windows XP SP2, and connects to the Internet through a DSL router provided by the ISP.

Looking forward to your replies.

Thanks

  • Log in to your DSL router and look at the status page. If you want to log this information, build a script that simply downloads the status page.

    You could also write a script with curl/wget that simply gets the page (http://checkip.dyndns.com/) on occasion.
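
    For example, a minimal sketch with curl (the sed expression assumes the page format that checkip.dyndns.com currently returns):

    curl -s http://checkip.dyndns.com/ | sed -e 's/.*Current IP Address: //' -e 's/<.*$//'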

    David Pashley : This obviously depends on the DSL router being configured to allow customers access to it. Many consumer-grade ISPs tend to ship routers with no user-visible interface.
    From Zoredache
  • You're just looking for the IP assigned to your NIC? Execute:

    IPCONFIG
    

    from a command prompt.

    Edit: Re-re-re-reading your question again, it looks like you're asking for the IP address assigned to the Internet-side interface of your router. That's not stored on the local machine. You're going to have to write something to get it from the router itself, or rely on a third-party site to get you that.

    Sam Cogan : This will only work if they are directly connected to the Internet; I believe he is looking to get the external IP when connected to a NAT router
    grawity : Also, what if the user has many NICs? What if a NIC has more than one IP?
    Evan Anderson : You're downvoting me for at least trying to parse his poor English? Yeesh! I guess I should ask for clarification in a comment rather than in an answer first, eh? Yeah-- if he has multiple NICs he's going to see multiple IP addresses. Far be it from me to assume that the poster has some degree of competence and would be able to figure that out.
    Sam Cogan : I agree that the question is poorly phrased, and your answer could be a valid response, I don't feel it deserves a downvote.
  • You could download WGet then run the following command:

    wget -q -O - http://whatismyip.com/automation/n09230945.asp
    

    You will need to run this either from the WGet directory, or add WGet to your system's PATH.

    Kevin Kuphal : I think this is the best option to do it programatically. It is unclear from the question whether the information can come from an online source or must only come from information stored on the local machine.
    From Sam Cogan
  • I like http://www.ipchicken.com/ .

    It doesn't provide the additional information that Kevin's site does, but I find the name sticks in people's heads and is easy to remember.

    From Peter
  • If your DSL router supports SNMP, you can usually fetch, via snmpget, the IP address assigned to its external interface. Most (but not all) DSL router manufacturers support SNMP and MIB-II.

    example:

    [root@myhost ~]# snmpwalk -v1 -c ***** 10.1.10.1 ipAdEntAddr
    IP-MIB::ipAdEntAddr.75.146.91.10 = IpAddress: 75.146.91.10
    

    In this case, I am querying the "inside" IP with SNMP and getting back my external IP, where ***** is my SNMP community string (password) and ipAdEntAddr is the SNMP OID I queried to get the answer.

    From netlinxman
  • You can download this: http://curl.haxx.se/latest.cgi?curl=win32-nossl

    Extract it, then go to a command prompt and type "curl http://whatismyip.com/automation/n09230945.asp"

    (same idea roughly as above by Sam)

    From TheCleaner

Securing SSH tunnels

We have an application that uses SSH to connect to a server: the application's network traffic (database, some custom TCP protocols, etc.) is tunneled through an SSH connection.

We use a key pair and an unprivileged account on the server, but users can still use their key to log in to the server, or set up whatever port redirection they want.

Is there a way to configure the SSH server to allow only certain tunnels (restricted by the tunnels' end address and port), and to disable shell access? (We use OpenSSH.)

[edit]

I came across this article; it seems that removing shell access is not enough. I've changed the title and description accordingly.

  • Setting the user's shell to /bin/false may do what you're looking for.

  • I believe you could set the ForceCommand directive to /bin/false to prevent shell access.
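
    As a hedged sketch (OpenSSH 4.4 or later; the group, host, and port are placeholders), you could combine that with PermitOpen in sshd_config to restrict which tunnels are allowed:

    # clients must connect with `ssh -N`, since any requested command is replaced by /bin/false
    Match Group tunnelusers
        ForceCommand /bin/false
        PermitOpen dbserver.example.org:5432
        X11Forwarding no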

    From mhud
  • In your authorized_keys file you can specify which command will be run when they log in. You could simply set that command to run something that just waits around for a long time. The sshd man page has a list of all the options you can use in your authorized_keys file.

    permitopen="tsserver.example.org:3389",no-pty,no-agent-forwarding,no-X11-forwarding,command="/usr/local/stm_shell.sh" ssh-rsa AAAAB3....
    

    My stm_shell.sh is below (it also enforces a 12-hour timeout). I am not 100% sure this is completely secure.

    #!/bin/bash
    
    # send a hang-up after this process exits
    shopt -s huponexit
    # maximum session length in hours
    CONNECT_TIME=12
    sleep $((CONNECT_TIME*3600))  # hours -> seconds
    kill $PPID
    
    Dan Carley : Not so keen on the additional shell script, but the first part is the right answer.
    Zoredache : I should probably post as a separate question, but are there other ways to limit the total connect time?
    Dan Carley : Not that I'm aware of, using OpenSSH alone. The only timeouts relate to automatic keepalives. Bash has such a variable, but that's no use, because the shell should of course be /bin/false or equivalent.
    Luper Rouch : permitopen is what I was looking for, thanks. What exactly is the advantage of your script over /bin/false when used in combination with permitopen? (Besides limiting session duration.)
    From Zoredache
  • Maybe the "ChrootDirectory" keyword in sshd_config (man sshd_config) might give a little extra security.

How do you do production IIS website deploys?

So, I'm not sure if this is a Stack Overflow or a Server Fault question. If I have a .NET website that I want to deploy to the production environment, what's the best way to do so? Should I package it as an MSI and install it? Use NAnt to push the needed files up? Just FTP the files up using Beyond Compare?

How do you deploy production code? This is a Windows specific case that I'm looking at here.

  • IIS supports xcopy deployment so just copying the files should be all you need unless you have special requirements.

    One way to do it is a simple script that uses ROBOCOPY to copy the new files to the server.

    If the site is large and this takes too long, use a version control system. I like Mercurial for this purpose, although you have to be careful that the version control system's configuration files don't end up being served to the public. Deploying is then simply a matter of committing the changes and then checking out the latest version on the server. In addition to being efficient, this allows quick rollbacks (if you tagged the last good version) in case your latest-and-greatest has a showstopper bug.

    To minimize downtime, you could have the script copy the files to a new directory and then quickly rename the directories, or repoint IIS at the new directory.
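
    As a minimal sketch, such a script can be a single line (the paths are made up; note that /MIR deletes destination files that are not in the source):

    robocopy \\buildserver\drops\mysite D:\inetpub\wwwroot\mysite /MIR /LOG:C:\deploy\robocopy.log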

    Jonathon Watney : The version control system is appealing but for web sites that require compilation it might not work out too well. Unless a compiled version is kept under version control of course.
    Josh : I never thought of putting a source control system out in production. Interesting; sure beats having to keep tons of extra zip files around.
    Luke : I do this all the time with Subversion. On Apache you'd use mod_rewrite to make sure users can't access the .svn directories. Using version control for deployment is definitely the way to go.
  • I'd further Joel's answer by suggesting a Continuous Integration server pick up your changes from your source control system. It will then build the project. Then have it xcopy the output of the build to a new folder. You can then make some quick config changes (web.config and app.config). Voila, ready for xcopy!

    Check out CruiseControl.NET

    From pcampbell
  • oh jeeez, at work we have a whole team for this. They have an in-house tool that takes a server out of the cluster/farm, publishes the files, runs the NUnits, and adds it back into the cluster/farm. They do this for each of 16 servers. It takes hours. The rest of us don't even have "look around access".

    For my personal projects, I publish from VS2005 directly to my webserver. Kinda has less strict security.

    From tsilb
  • Consider using the Web Deployment Tool from Microsoft. It was specifically designed to help deploy web applications and updates to those web applications to production IIS 6 and 7 web servers and it does a better job of the task than MSI (Windows Installer), IMHO.

    Normally you use it by setting up a "gold master" site somewhere and then telling the tool to pack up the changes from there. It will then look at a target server for deployment and make any changes necessary to make it look like the gold master (which is useful for subsequent updates). It is particularly useful if you are deploying to more than one web server (i.e. a farm), and it has support for deploying more than just files (it can also handle making registry changes, deploying certs, SQL databases, etc).

    Portman : +infinity. This tool is a lifesaver and frees entire departments (a la tsilb) to work on more interesting problems.
    From Erv Walter
  • What I did at my previous employer, which was basically an auction/e-commerce site where we could not permit much downtime:

    • Take a zipped build version of the release/version to deploy on the build server
    • Test it on a staging server which has a copy of the production database and the same software versions as the production server. Check that everything went smoothly; if not, restart the deployment on the staging server (but first restore a backup).
    • If everything went well: copy the build and database upgrade scripts to a local folder on the production server. Take a specific backup of the database and the ASP.NET files (in case something still goes wrong). Then prepare everything so that I only have to press Enter to launch the upgrade script and the copying of the files (note that I could create a script for this). Then launch everything. This is normally a matter of seconds, and the users will hardly notice that there has been downtime.

    There are a lot more fun things to do as a web developer. But this was the most crucial part of my work.

    From Michael

Get Progress database version on Unix

Is there a simple Unix console command to determine which version of a Progress database is running? I have root access to the Unix console.

Thanks in advance on any guidance!

  • If you look in your installation path's bin directory (usually $DLC/bin) you will find an executable called

    pro
    

    If you execute that with no parameters, it should echo back some information like the following; you'll notice that it's letting us know the version near the end:

            @@@@@@   @@@@@@   @@@@@@@   @@@@@   @@@@@@   @@@@@@@   @@@@@    @@@@@
           @     @  @     @  @     @  @     @  @     @  @        @     @  @     @
          @     @  @     @  @     @  @        @     @  @        @        @
         @@@@@@   @@@@@@   @     @  @  @@@@  @@@@@@   @@@@@     @@@@@    @@@@@
        @        @   @    @     @  @     @  @   @    @              @        @
       @        @    @   @     @  @     @  @    @   @        @     @  @     @
      @        @     @  @@@@@@@   @@@@@   @     @  @@@@@@@   @@@@@    @@@@@
    
                               Progress Software Corporation
                                        14 Oak Park
                                Bedford, Massachusetts 01730
                                        781-280-4000
    
           PROGRESS is a registered trademark of Progress Software Corporation
                                  Copyright 1984-2004
                            by Progress Software Corporation
                                  All Rights Reserved
    
    OpenEdge Release 10.0B05 as of Sat Apr 15 00:44:33 EDT 2006
    

    P.S. I'm sorry you have to deal with progress.

    Jorrit Reedijk : Thanks for your reaction. On running the command I get the message "This version of PROGRESS requires a startup procedure. (495)". I know the version has to go back at least 8 years, because that's how long the server has been running. P.S. Me too :)
  • In the BIN directory I found some files using "ls pro*", including "proutil". This doesn't start up without a supplied database name, but it shows its own version nevertheless.

    PROGRESS Version 8.3E as of Wed .... EST 2001 in my case.

    Mark Turner : Yeah, that would do it. Sorry I didn't have any older Progress installs up anymore. I had a few Solaris 8 machines running Progress 8. Now everything is on RHEL 4 or 5.
  • 1) There is a file called "version" in the installation directory ($DLC). The "pro" command cats this file on startup. You can too: cat $DLC/version

    2) There is also a command called "showcfg" which will provide all of your licensing data. "$DLC/bin/showcfg".

    From Tom Bascom
  • BTW -- Progress version 8 dates from the mid 90s. 8.3E was one of the last patch releases to v8.

    From Tom Bascom

How do I use Dvorak on OpenSolaris's console?

For more than 10 years, I've been meaning to try out Solaris, to broaden my system administration experience (most of which is currently with Debian, Ubuntu, and OpenBSD), not least because of the features that Solaris pioneered, such as ZFS and DTrace.

On top of that, OpenSolaris now has a user experience that was "inspired"[1] by Ubuntu, and looks like a fairly credible desktop system too (with my favourite theme, Nimbus :-P).

There is only one real hurdle for me: the console has no Dvorak support.

It's true that in X I can simply use setxkbmap dvorak (it worked when I tested it on OpenSolaris 2008.11), but there are some maintenance tasks that can only be done in single-user mode. It would be ideal to remove the "cognitive dissonance" of using the system by not having to switch back and forth between the two layouts.

[1] In the same way that CNProg was "inspired" by Stack Overflow. :-)

  • I found a wiki, albeit in French, that seems to have what you want.

    Chris Jester-Young : Awesome---obviously I can tweak the keymap from that site and use it. It's neat that the author of that site has keymaps and instructions for multiple OSs (although, at least for US Dvorak, most OSs already have the keymaps built-in). I'll wait a couple of days to see if others have good answers; after that, I'll pick a best answer. Many thanks!

For a small home network, is there any point to running Squid?

For a small home network -- two laptops, two desktops, plus the main server -- should I expect much gain by running Squid on the main server? I fully understand the value of running a caching name server for a small home network, but I'm not sure if there is any value to running an HTTP caching proxy.

The main server does full NAT for all other computers on the network, in case it matters.

  • I don't think it's worth it. You might see a small speed-up on commonly used pages (if they're used by both machines), but you're not going to save enough bandwidth or time to make it worth the hassle, IMO.

    From womble
  • If they all run the same operating system, and the update system on said operating system is caching-friendly, using a web cache would mean that you download each update once, not (in your case) four times.

    YMMV.

    Chris Jester-Young : Mind you, if they run Windows, you may win more by setting up WSUS than Squid. So yeah. Definitely YMMV.
  • If you have a small network, there's not much to be gained. I've had squid running on our home network. Last time I measured it, it was reducing the traffic by 6-8%. There's also the gain you get by removing the effect of latency on TCP setup. But unless multiple users are hitting the same site, the re-visit hits are likely as not going to come from your browser's cache anyway. If I didn't know the squid was running I'd be hard pressed to tell the difference.

    From John McC
  • Next to nothing, or it may even be slower, unless all 4 computers go to the exact same websites all the time, and even then there won't be anything to get happy about. I probably spent all the time you would save typing this message :S

    From Shard
  • If you block all outbound traffic except for that from squid, you can get a nice view of what is sending traffic out of your network that you didn't otherwise know about. Some stuff will just bounce off your firewall and the rest you can see in your squid logs.
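
    A rough iptables sketch of that idea on the NAT box (assuming Squid runs locally as the 'proxy' user; chains and policies simplified):

    # let Squid itself reach the web...
    iptables -A OUTPUT -p tcp --dport 80 -m owner --uid-owner proxy -j ACCEPT
    # ...but refuse to forward direct web traffic from the LAN
    iptables -A FORWARD -p tcp --dport 80 -j REJECT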

  • You probably won't really see much gain in network performance.

    What you do get is

    • A place where you can block annoying sites
    • You can integrate ClamAV to help block malware before it gets to your Windows systems.
    • You can watch the access logs and see which computers are doing what, and possibly see requests from systems that were infected by malware, or see when some stupid application is checking every 5 minutes for application updates.
    • Protect the kiddies from sites you feel are objectionable, when combined with blacklists.
    David : I'd say you get decent improvement if the internet link is a slow one, like dial-up or satellite.
    From Zoredache

What can cause Apache HTTPD to use 100% CPU indefinitely

An application running on a lightly loaded Apache HTTPD 2.0 occasionally had problems where one (or more?) of the Apache processes took 100% CPU. We currently run HTTPD 2.2, and we may have seen this with 2.2 as well; I'm not certain. In some cases, the CPU usage was such that it blocked all but console access to the Windows server hosting HTTPD. I have never been able to track down what causes Apache to do this.

The environment is Apache HTTPD directly serving static content, using mod_rewrite but not much other custom configuration. HTTPD talks to Apache Tomcat (5.x) via mod_jk (1.2.25).

Has anyone else encountered this and solved it? The workaround we installed is to limit each Apache HTTPD subprocess to a maximum number of requests with the following configuration:

MaxRequestsPerChild 1000

where, because the application uses HTTP/1.1 persistent connections, this is really more than 1000 requests per child process -- more like 100,000 requests per child process.

  • Limiting MaxRequestsPerChild will help with memory usage, but it shouldn't affect the CPU in the way you're describing. What's likely happening is that your mod_jk is crashing, and since it's an Apache module it shows up under the httpd process.

    From wizard
  • When I've seen this it has been because:

    • a hosted app or script is causing the problem (for example, it has an infinite loop or something)
    • the OS has become unstable, due to locking or some other issue, where rebooting temporarily solved the problem

    My suggestions:

    • Reboot the machine.
    • Wait and see if this happens again.
    • Restart the server with no mods, etc.
    • Start turning on each mod one by one, and each time observe the usage.

    Eddie : In the case where this has been seen, the only application is my single web application plus management applications. No PHP or CGI, only Tomcat via mod_jk. Rebooting the machine always fixed it and this has happened rarely, once or twice a year at most. But it's a fatal problem when it occurs, which is why I am concerned about it.
    From cbrulak
  • I've actually seen this happen when you have a log directory that doesn't exist. I'm not sure why they don't handle that better, but you may want to make sure that all the log directories are there and that the process can write to them.

    From carson
  • It's most likely that the lock-up is happening in a module rather than in Apache itself. Your setup sounds pretty minimal, so I'd suspect mod_jk as the culprit. If limiting MaxRequestsPerChild fixes the problem then I'd say that's an acceptable workaround. It's possible that a bug in the module is only triggered after a long time or many requests, and unless you're really keen on tracking this down then making it go away is probably good enough.

    If you want to track it down then the first thing to do is configure CoreDumpDirectory to point to some location that the server user can write to. If you can get the offending process to leave a core file behind then it should help you track down the cause of the problem. You can find some hints on doing this in the Apache Debugging Guide.
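
    A minimal sketch of that setup on a Unix host (the directory and user are assumptions; use whatever user your httpd actually runs as):

    mkdir /var/tmp/apache-cores
    chown nobody /var/tmp/apache-cores
    # then add to httpd.conf and restart:
    #   CoreDumpDirectory /var/tmp/apache-cores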

  • Install mod_proctitle for Apache.

  • RLimitCPU doesn't always help because not all portions of the apache code have checks for it.

    MaxRequestsPerChild may not help either, as I've seen this with relatively 'fresh' children.

    In my case, I suspect it's something to do with the module we're using (mod_perl) and perhaps something with a broken socket connection. We only seem to see this problem with browsers connecting, not from wget or curl (which we use heavily for 'data delivery').

    From ericslaw

Motherboard / RAM compatibility

With standard PCs, does any modern RAM generally work with any modern motherboard, so long as they can physically interconnect?

If not, is there a table I can consult?

  • Not quite, and don't waste your time looking at manuals and such. Your head will explode.

    Crucial has a Windows app you can just download and run on your PC so you buy the right kind of memory. Purchasing is then a one click operation. In my experience they are cheap and delivery is fast.

    You can also run an app called CPU-Z which will tell you what kind of memory is in your motherboard, how many free slots you have, etc.

    raldi : That Crucial app sounds cool -- except I don't yet have the motherboard I'll be plugging the RAM into, and even if I did, I wouldn't be able to run the app because... I don't have any RAM. :)
    Omar Shahine : In that case, grab the specs for the motherboard's RAM and then punch them into Crucial or Newegg's website and you should find plenty of results.
  • Well, generally things won't explode; you'll be safe there. However, you'll need to consult the motherboard manual to find out the clock speeds it supports: only memory with a matching clock speed (or higher -- it will just run at the speed of the motherboard) will work.

    From sascha
  • No, just because the RAM fits in the slot doesn't mean it will work.

    The best place to start is usually to check out the support web site, Manual or Specs sheet for the motherboard.

    If you have a system from one of the major manufacturers (Dell, HP, etc.) you can usually look up exactly what you need by using the search tools on a RAM manufacturer's site.

    From Zoredache
  • Once I installed 2 memory modules in the mainboard and everything worked nicely. After some hours of using Windows, the computer showed the BSOD...

    The reason was that the 2nd memory module was faulty (each module had 2 GB; the computer crashed only once Windows used more than 2 GB). Ever since noticing that, I always use a memory test program like MemTest.

    From Click Ok
  • Despite the warnings above, most of the time any memory of the proper type and speed will work. The people above are correct -- if you don't want to throw money away, consult the OEM or manufacturer, or use an application such as the one provided by Crucial.

    Most of the RAM problems I've had were problems of mixing RAM purchased at different times. One machine I had would work with either 512 Meg stick of RAM that I had, but it just would not work with both at the same time. I never figured this out.

    In my experience, you're much more likely to have a problem mixing RAM sticks purchased at different times than you are if you totally replace all of the RAM in the computer with matched memory sticks. So far, I've never bought memory and had it fail to work -- unless I was mixing memory sticks of different types.

    I know that it is absolutely possible to buy memory and have it fail to work. This appears to be much more true (in what I have seen) for systems that take special memory, such as parity memory. If you want to ensure that you don't throw money away, consult the right experts. Otherwise you are taking a (usually small) chance that the memory will not work.

    Note that, at least in what I have seen, trying incompatible memory in a motherboard has never damaged anything. It just failed to POST, failed to boot, or failed to be reliable. In every case I have personally experienced, incompatible memory has not caused any damage to other hardware.

    From Eddie

Open Source Image Software

I need an open source disk imaging application (something like Ghost or Acronis). Which one would you suggest?

  • I've heard good things about FOG as a Ghost alternative. No personal experience to back that up, however, as I used a custom PXE solution before FOG was available.

    Alister Bulman : Floss weekly has done a podcast on FOG. http://twit.tv/floss53
    From cdleary
  • I like GParted and CloneZilla. GParted is my favorite for single use; CloneZilla's best when you need to blast images out across the network.

    Nikos Steiakakis : I was thinking about GParted as well. Thanks!
  • I use dd. =D

    From Jauder Ho
  • It is not open source, but in a pinch, MaxBlast is very good.

    From jake
  • My personal choice is PING (Partimage Is Not Ghost), as I've used it considerably at work and at home, with much success. We've got base images for our most common machine models; 1.5-2 hour builds are down to 25-30 mins depending on the machine.

    So far I've only tried backing up to another partition or a USB drive, though it does support backing up to a network drive.

    From thing2k
  • dd and dd_rescue.

    See also, this question: Using DD for disk cloning
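
    A rough sketch of the dd + gzip + netcat combo mentioned in the comment below (device names, address, and port are placeholders; traditional netcat option syntax assumed):

    # on the machine receiving the image:
    nc -l -p 9000 | gzip -d | dd of=/dev/sdb bs=64k
    # on the machine being cloned (booted from live media):
    dd if=/dev/sda bs=64k | gzip -c | nc 192.168.1.10 9000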

    Mark Porter : The advantage of dd is that it is available in almost every Linux environment, including that old Ubuntu DVD you got in a magazine once. If you learn how to use the dd + netcat + gzip combo you'll have a very flexible tool in your mental toolbox for network imaging.
    From Stewart
  • I like g4u and FOG.

    • g4u is more of a standalone system for single (or just a few) machines.
    • FOG is really a complete replacement for a ghost system and is really intended for large environments.
    From chills42
  • Designed more for backups than cloning, but mondo is pretty good.

    It creates bootable ISOs of your hard drive, from which you can easily restore to a different machine.

    From Brent
  • It is definitely not "open source", but maybe still good enough, and it has an API: ImageX / WIM files from the Windows Vista AIK toolkit. With that free download you get Windows PE and the imagex command-line tool. It will make a file-based image of your NTFS drive into a WIM file which is single-instance, compressed, and stores everything you need (ACLs, owner, streams), and only skips the pagefile if you want.

    It's very easy to use, e.g. over a mounted network share!

    From Christian

Relative failure rates for hardware components

Let's say I'm setting up a single machine server. Without knowing the specific components in it (and being able to look up their MTBFs), what are the typical relative failure rates of the hardware components in the server?

Equivalently, what are the rankings of the most often-replaced components across all the servers in corporate use?

    1. Hard Drives
    2. Everything else

    Best to keep spares of everything on-site, though, unless you're OK with whatever downtime your hardware vendor decides to give you.

    From womble
  • Anything that moves, which in a server is basically hard drives and fans, will fail much more often than solid-state components. Power supplies are a distant, but notable, second. Everything else (cpu, memory, etc) is pretty reliable... which is not to say immune to failure, but definitely should be worried about after you've got your disk/fan/psu bases covered.

  • You will see more problems with the firmware and drivers for the hardware than you will actually see physical failures (at least early in the device's lifetime), so make sure those are up to date and tested first.

    SATA drives will usually be the first to go. SAS tends to be more reliable. (Although I've heard good things about the latest SATA 2 drives)

    1. Hard disks
    2. Power supplies (all too common)
    3. Things you plug in and out (more common for desktops than servers)
    4. Everything else, especially after the power supply dies and takes things with it...

    Once upon a time, CPU fans also used to be on the list; lately, I can't remember the last time I saw one stop working, but it's a possibility, especially in a dusty environment.

    From Mikeage
  • Google has published a paper, "Failure Trends in a Large Disk Drive Population", about failure statistics for a wide set of drives. The main takeaway is that disks fail above and beyond what their MTBF would suggest. Disks are easily the most failure-prone component in the server room.

    From jldugger
  • About hard disks: many people misunderstand the MTBF and think a drive with an MTBF of 100,000 hours will last, on average, for 11.5 years. What the manufacturer means is that in a collection of a large number of drives, N, all within their lifetime, one drive will fail every 100,000/N hours. If you have 100,000 drives that each have an MTBF of 100,000 hours, then you should expect a drive to fail -- on average -- every hour.

    Hard drives fail more often than people expect. Back up, back up, back up.

    Anything with moving parts can fail, including tape drives, floppy drives, fans, and so on. I've had the fan on graphics cards die, causing the death of the graphics card. I've had the power supply fan die, causing most of the parts of the computer to die. (Since then I've never built a system without extra fans.) Tape drives require extra care, or their lifetimes will be significantly shortened. This is because not only does it move, but the tape head makes physical contact with the tape media -- at least in many kinds of tape drives. Cleaning the drive too often with ordinary tape cleaning media will wear away the tape heads.

    I've had the built-in chipset fans die, but so far without any effect. So far I've never had a CPU fan die, but I tend to upgrade often enough that I probably avoid this via upgrades. (grin)

    I replace my disk drives every several years (mostly because the capacity available increases so rapidly), so have experienced relatively few hard drive failures. I've had many power supplies fail -- many more than I would have naively expected for a component with no moving parts other than the fan. I assume that power irregularities are the cause of many power supply failures.

    So far, in a few decades of computing, I have never had a CPU or RAM or motherboard fail unless there was a reasonable cause, such as overheating (fans dying). However, a few brands of motherboards over the years have had much shorter lifetimes than expected due to sub-par parts, often incorrectly manufactured capacitors where power enters the motherboard.

    Anywhere that you have a plugged-in connection is a point of failure. I've had computers fail (mostly long ago) due to cheap tin-plated connectors. The tin oxidized, and over time the connection became less and less reliable. Eventually I unplugged everything, took an eraser to the tin connectors to remove the oxidation, plugged everything back in, and was up and going for a while longer. Gold connectors are the connector of choice for a reason.

    From what I've seen in a corporate environment, with my home experience mixed in, components seem to fail in this order, from most to least frequently:

    1. Hard drives and tape drives
    2. Power supplies
    3. Fans
    4. Distantly, everything else

    Not mentioned above, but you should expect all flash memory sticks/cards to eventually die, depending on frequency of use. But it will take a long time given the average use of most such cards. Flash memory "wears out" with use and memory cells will eventually fail.

    From Eddie
  • Anecdotally, batteries.

    I have no hard data, but I have replaced more failed or under-performing batteries in my life than any other component. This includes uninterruptible power supplies, laptops/notebooks, controller batteries, mobile phone batteries, and probably a lot of others.

    This has led me to always stock an extra battery pack for a server room's UPS.

    Eddie : +1, good point. All batteries have a lifetime, which depends on usage patterns and battery technology. Don't expect any battery to last forever.
    From Portman

Reverse proxy for HTTP acceleration

I provide hosting facilities for a high-traffic website that will receive a spike in traffic in the next 2 months. To help it perform better, I want to prepare by putting a frontend server in place, acting as a reverse proxy, and directing traffic through it.

What reverse proxy do you suggest I use?

I've used Apache mod_proxy in the past with some good results, but is there something more performant out there, something more specific to the job? I need it to be fast, to cache everything it can, and to avoid making requests to the backend when not needed.

I thought about HAProxy, but it seems to be aimed more at providing high availability (multiple backend webservers). In this setup, the whole website is hosted on a single server, running some LAMP stuff.

  • I can't really speak to the relative performance, or how it will perform under massive load, but I have used Squid for HTTP acceleration in the past. It works pretty well.

    If your web site has lots of dynamic content a cache may not be able to help you much. You may want to check that the web site is sending out useful cache control headers and not just immediately expiring everything.

    From Zoredache
  • Varnish appears to be a pretty popular reverse proxy.

    Also, I believe you can run Nginx as a reverse proxy too (using memcached as an option, I think).
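
    A minimal sketch of the nginx approach (assuming the existing LAMP site is moved to listen on 127.0.0.1:8080):

    server {
        listen 80;
        location / {
            proxy_pass http://127.0.0.1:8080;
            proxy_set_header Host $host;
        }
    }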

    Pablo Alsina : It's a nice option, and yes, it seems to use HttpProxyModule and HttpMemcachedModule (http://wiki.nginx.org/NginxHttpMemcachedModule). It seems somewhat limited though.
    From Evan
  • We've had very good experience with Varnish.

    Redpill Linpro, the company behind it, states on the Varnish product site:

    Varnish is a reverse Web accelerator designed for content-heavy dynamic web sites. In contrast to other HTTP accelerators, many of which began life as client-side proxies or origin servers, Varnish was designed from the ground up as an accelerator for incoming traffic. We actually claim that Varnish is ten to twenty times faster than the popular Squid cache on the same kind of hardware!

    Our experience is that this is very much true. In addition to being written with reverse-proxy performance in mind, the VCL configuration language is very powerful, and you can get very detailed information about what it does while it works (see question 3425).

    Varnish is open source, and has a good community, while being actively developed by the company.

What are useful .screenrc settings?

Basically, settings like some of my own that I've posted below. I'm looking for added functionality for the programme 'screen'. At the very least, have a look at the last line for a fantastic 'menu bar' at the bottom of a screen session.

## gyaresu's .screenrc 2008-03-25
# http://delicious.com/search?p=screenrc

# Don't display the copyright page
startup_message off

# tab-completion flash in heading bar
vbell off

# keep scrollback n lines
defscrollback 1000

# Doesn't fix scrollback problem on xterm because if you scroll back
# all you see is the other terminals history.
# termcapinfo xterm|xterms|xs|rxvt ti@:te@

# These will let you use the hyphen to select windows 10 through 15
bind -c selectHighs 0 select 10 #these three commands are 
bind -c selectHighs 1 select 11 #added to the command-class
bind -c selectHighs 2 select 12 #selectHighs
bind -c selectHighs 3 select 13
bind -c selectHighs 4 select 14
bind -c selectHighs 5 select 15


bind - command -c selectHighs   #bind the hyphen to 
                                #command-class selectHighs 


screen -t rtorrent  0 rtorrent 
#screen -t tunes     1 ncmpc --host=192.168.1.4 --port=6600 #was for connecting to MPD music server.
screen -t stuff  1
screen -t irssi  2 irssi
screen -t dancing   4     
screen -t python    5 python
screen -t giantfriend   6 these_are_ssh_to_server_scripts.sh
screen -t computerrescue    7 these_are_ssh_to_server_scripts.sh
screen -t BMon   8 bmon -p eth0
screen -t htop   9 htop
screen -t hellanzb  10 hellanzb
screen -t watching  3 
#screen -t interactive.fiction  8
#screen -t hellahella   8 paster serve --daemon  /home/gyaresu/downloads/hellahella/hella.ini 

shelltitle "$ |bash"

# THIS IS THE PRETTY BIT
#change the hardstatus settings to give an window list at the bottom of the                                                                        
##screen, with the time and date and with the current window highlighted                                                                            
hardstatus             alwayslastline                                                                                                                          
#hardstatus string '%{= mK}%-Lw%{= KW}%50>%n%f* %t%{= mK}%+Lw%< %{= kG}%-=%D %d %M %Y %c:%s%{-}'
hardstatus string '%{= kG}[ %{G}%H %{g}][%= %{= kw}%?%-Lw%?%{r}(%{W}%n*%f%t%?(%u)%?%{r})%{w}%?%+Lw%?%?%= %{g}][%{B} %d/%m %{W}%c %{g}]'

  • I also can't live without the menu bar. One thing I do not like putting on the menu, which a lot of people have, is the time; it prevents PuTTY's scrollback from staying scrolled back (since it's considered a screen update).

    Murali Suriar : You could use screen's inbuilt scrollback? `C-A [` by default will put you into copy mode, and will allow you to navigate your current tab using keyboard commands like a text editor?
    From Mikeage
  • I also use a fairly involved caption/hardstatus line combination, to simulate the effect of dropdown tabs (the caption line is solid grey and the current tab in the hardstatus is the same color).

    I also have my shell tell screen what the current process name is and what directory I'm in, so my tab names stay up to date with what I'm doing in each tab. This is critical to remembering what I'm doing where without having to flick through all my open tabs.

     # don't use the hardstatus line for system messages, use reverse video instead
     # (we'll be using it for the list of tab windows - see hardstatus alwayslastline
     # below)
     hardstatus off
    
     # use the caption line for the computer name, load, hstatus (as set by zsh), & time
     # the caption line gets repeated for each window being displayed (using :split),
     # so we'll use color cues to differentiate the caption of the current, active
     # window, and the others.
     #    always                  - display the caption continuously.  Since
     #                              hardstatus is 'alwayslastline', it will be on the
     #                              next to last line.
     #    "%?%F"                  - if (leading '%?') this region has focus ('%F') 
     #                              (e.g. it's the only region being displayed, or,
     #                              if in split-screen mode, it's the currently active
     #                              region)
     #      "%{= Kk}"               - set the colorscheme to blac[k] on grey (bright blac[K]),
     #                                with no other effects (standout, underline, etc.)
     #    "%:"                    - otherwise ('%:' between a pair of '%?'s)
     #      "%{=u kR}"              - set the colorscheme to [R]ed on blac[k], and
     #                                underline it, but no other effects (bold, standout, etc.) 
     #    "%?"                    - end if (trailing '%?')
     #    "  %h "                 - print two spaces, tthne the [h]ardstatus of the
     #                              current tab window (as set by zsh - see zshrc) and
     #                              then another space.
     #    "%-024="                - either pad (with spaces) or truncate the previous
     #                              text so that the rest of the caption string starts
     #                              24 characters ('024') from the right ('-') edge of
     #                              the caption line.
     #                              NOTE: omitting the '0' before the '24' would pad
     #                              or truncate the text so it would be 24% from the
     #                              right.
     #    "%{+b}                  - add ('+') [b]old to the current text effects, but
     #                              don't change the current colors.
     #    " %C:%s%a %D %d %M %Y"  - print the [C]urrent time, a colon, the [s]econds,
     #                              whether it's [a]m or pm, the [D]ay name, the [d]ay
     #                              of the month, the [M]onth, and the [Y]ear.
     #                              (this takes up 24 characters, so the previous
     #                              pad/truncate command makes sure the clock doesn't
     #                              get pushed off of the caption line)
     #    "%{= dd}"               - revert to the [d]efault background and [d]efault
     #                              foreground colors, respectively, with no ('= ')
     #                              other effects.
     #  other things that might be useful later are
     #    " %H"                   - print a space, then the [H]ostname.
     #    "(%{.K}%l%{-}):"        - print a '(', then change the text color to grey
     #                              (aka bright blac[K]), and print the current system
     #                              [l]oad.  Then revert to the previous colorscheme
     #                              ('%{-}') and print a close ')' and a colon.
     #                              NOTE: the load is only updated when some other
     #                              portion of the caption string needs to be changed
     #                              (like the seconds in the clock, or if there were a
     #                              backtick command)
     #    "%0`"                   - put the output of a backtick command in the line
     #    "%-024<"                - don't pad, just truncate if the string is past 24
     #                              characters from the right edge
     #    "%-="                   - pad (with spaces) the previous text text so that
     #                              the rest of the caption string is justified
     #                              against the right edge of the screen.
     #                              NOTE: doesn't appear to truncate previous text.
     caption always           "%?%F%{= Kk}%:%{=u kR}%?  %h %-024=%{+b} %C%a %D %d %M %Y%{= db}"
     # use the hardstatus line for the window list
     #    alwayslastline      - always display the hardstatus as the last line of the
     #                          terminal
     #    "%{= kR} %-Lw"      - change to a blac[k] background with bright [R]ed text,
     #                          and print all the tab [w]indow numbers and titles in
     #                          the [L]ong format (ie with flags) upto ('-') the
     #                          current tab window
     #    "%{=b Kk} %n%f %t " - change to grey (bright blac[K]) background with
     #                          [b]old blac[k] text, with no other effects, and print
     #                          the [n]umber of the current tab window, any [f]lags it
     #                          might have, and the [t]itle of the current tab window
     #                          (as set by zsh - see zshrc).
     #                          NOTE: the color match with the caption line makes it
     #                          appear as if a 'tab' is dropping down from the caption
     #                          line, highlighting the number & title of the current
     #                          tab window.  Nifty, ain't it)
     #    "%{-}%+Lw "         - revert to the previous color scheme (red on black)
     #                          and print all the tab [w]indow numbers and titles in
     #                          the [L]ong format (ie with flags) after ('+') the
     #                          current tab window.
     #    "%=%{= dd}"         - pad all the way to the right (since there is no text
     #                          that follows this) and revert to the [d]efault
     #                          background and [d]efault foreground colors, with no
     #                          ('= ') other effects.
     hardstatus alwayslastline "%{= kR} %-Lw%{=b Kk} %n%f %t %{-}%+Lw %=%{= dd}"
    

    So here are my zshrc settings that tell screen what I'm doing in each tab.

    # ~/.zshrc
    # if using GNU screen, let the zsh tell screen what the title and hardstatus
    # of the tab window should be.
    if [[ $TERM == "screen" ]]; then
      _GET_PATH='echo $PWD | sed "s/^\/Users\//~/;s/^~$USER/~/"'
    
      # use the current user as the prefix of the current tab title (since that's
      # fairly important, and I change it fairly often)
      TAB_TITLE_PREFIX='"`'$_GET_PATH' | sed "s:..*/::"`$PROMPT_CHAR"'
      # when at the shell prompt, show a truncated version of the current path (with
      # standard ~ replacement) as the rest of the title.
      TAB_TITLE_PROMPT='$SHELL:t'
      # when running a command, show the title of the command as the rest of the
      # title (truncate to drop the path to the command)
      TAB_TITLE_EXEC='$cmd[1]:t'
    
      # use the current path (with standard ~ replacement) in square brackets as the
      # prefix of the tab window hardstatus.
      TAB_HARDSTATUS_PREFIX='"[`'$_GET_PATH'`] "'
      # when at the shell prompt, use the shell name (truncated to remove the path to
      # the shell) as the rest of the title
      TAB_HARDSTATUS_PROMPT='$SHELL:t'
      # when running a command, show the command name and arguments as the rest of
      # the title
      TAB_HARDSTATUS_EXEC='$cmd'
    
      # tell GNU screen what the tab window title ($1) and the hardstatus($2) should be
      function screen_set()
      {
        # set the tab window title (%t) for screen
        print -nR $'\033k'$1$'\033'\\\
    
        # set hardstatus of tab window (%h) for screen
        print -nR $'\033]0;'$2$'\a'
      }
      # called by zsh before executing a command
      function preexec()
      {
        local -a cmd; cmd=(${(z)1}) # the command string
        eval "tab_title=$TAB_TITLE_PREFIX$TAB_TITLE_EXEC"
        eval "tab_hardstatus=$TAB_HARDSTATUS_PREFIX$TAB_HARDSTATUS_EXEC"
        screen_set $tab_title $tab_hardstatus
      }
      # called by zsh before showing the prompt
      function precmd()
      {
        eval "tab_title=$TAB_TITLE_PREFIX$TAB_TITLE_PROMPT"
        eval "tab_hardstatus=$TAB_HARDSTATUS_PREFIX$TAB_HARDSTATUS_PROMPT"
        screen_set $tab_title $tab_hardstatus
      }
    fi
    
    From rampion
  • The most useful screen customization, IMHO, is to change the modifier key to something other than C-a. That is just too important a key to have eaten (it goes to the beginning of the line at all readline prompts, and in Emacs). I use C-z, since I need to suspend applications a lot less often than I need to edit something at the beginning of the line.

    The magic word is:

    escape ^za
    
    Craig Sanders : I set mine to Ctrl-K because it's the least commonly used Ctrl key in the apps that I use. ^A is too useful in bash/readline to sacrifice.
    Hamish Downer : To check what you clash with you could consult http://superuser.com/questions/120333/what-are-the-common-control-combinations-in-a-terminal-setting (which I asked with this in mind).
    From jrockway
  • For those wanting a less cryptic way of getting a nice screen set up, I can heartily recommend byobu (formerly called screen profiles). It gives you a nice default set of stuff at the bottom of the screen - the bottom line contains various handy status information, and the second from bottom line contains a list of your screen windows. All this can be configured in a nice easy ncurses menu by pressing F9.

    The function keys are mapped to common operations:

    • F2 - create a new window
    • F3 - Go to the prev window
    • F4 - Go to the next window
    • F5 - Reload profile
    • F6 - Detach from the session
    • F7 - Enter scrollback mode
    • F8 - View all keybindings
    • F9 - Configure screen-profiles
    • F12 - Lock this terminal

    See this article for a tutorial and screenshots.

    Byobu is in the Ubuntu repositories from Karmic (9.10) onwards. In Jaunty it was called screen-profiles. Before that, it can be installed from this PPA or from this download page. It's widely packaged for other up-to-date distros as well.

    It does depend on Python, but once you have byobu set up as you like it, you can have it generate a tarball containing all you need to recreate your screen setup on another computer using byobu-export.

    jtimberman : Screen-profiles is *awesome*. I was going to answer with the same.
  • I often have more than 10 windows running and wanted a way to select them. I found out how to configure C-a Shift+0 through 9 to select windows 10 through 19.

    bind  ! select 11
    bind  @ select 12
    bind \# select 13
    bind  $ select 14
    bind  % select 15
    bind \^ select 16
    bind  & select 17
    bind  * select 18
    bind  ( select 19
    bind  ) select 10
    

    Note the escapes on # and ^.

    From staticsan
  • The backtick command is pretty groovy. Read about it in man screen. I use it like so:

    backtick 1 15 15 $HOME/bin/cpuusage
    # now add '%1`%% CPU' to your hardstatus string. Result is like 38.4% CPU.
    

    My cpuusage script for Linux and Mac is:

    #!/bin/bash
    if [[ $(uname) == "Darwin" ]]; then
        top -i1 -l2 -n0|awk '/CPU/{i+=1; gsub(/%/,"",$0);p=substr(sprintf("%3.2f",$8+$10),0,4);if(i==2){printf "%g", p}}'
      else
        awk 'NR==1 {p=substr(sprintf("%3.2f", ($2+$3)/($2+$3+$4+$5)*100),0,4); printf "%g", p;}'</proc/stat
    fi
    
  • I have F11 and F12 set to cycle through windows; it makes moving between windows quicker, especially for windows numbered above 10.

    # Bind F11 and F12 (NOT F1 and F2) to previous and next screen window
    bindkey -k F1 prev
    bindkey -k F2 next
    
  • If you are using urxvt, the following will allow CTRL+LEFT and CTRL+RIGHT to be used to move to the previous and next tab window:

    bindkey "^[Od" prev  # ctrl-left
    bindkey "^[Oc" next  # ctrl-right
    

    Reconnecting to a remote screen session that should always be running or immediately created:

    bind V screen -t MYTABNAME ssh -t MYUSERNAME "screen -x main || screen -R -S main"
    

    Turning flow control off by default allows you to use CTRL+R in rtorrent properly:

    defflow off
    

    If running rtorrent as a daemon with its own user account, this .screenrc can be useful:

    vbell off
    startup_message off
    escape ^Rr
    screen -t rtorrent rtorrent
    multiuser on
    acladd YOURUSERNAME
    defflow off
    
    From Trey