Sunday, January 23, 2011

How to run Patch Program from CPanel?

I have a .diff file that I want to apply to my website, but I don't know how to run the patch program from cPanel.

Any ideas?

  • Download the files, apply the patch locally, and upload the files back. Crude, but that's how it's done. You can't apply it on the server unless you have some sort of shell access there.
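
    A rough sketch of that workflow, assuming the diff (changes.diff is a placeholder name here) was made against the site root; adjust the -p level to match the paths inside the diff:

    # 1. download the files (cPanel File Manager, FTP, etc.) into ./site
    cd site
    # 2. apply the diff locally; --dry-run first to check it applies cleanly
    patch -p1 --dry-run < ../changes.diff
    patch -p1 < ../changes.diff
    # 3. upload the patched files back the same way you downloaded them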

How do I get squid peers to talk SSL to each other?

How would I set up a pair of squid proxies so that one uses the other as a parent and all traffic between them is encrypted using SSL? I've read the cache_peer documentation, but it's all very fuzzy to me which certs I need to create (and how), which server uses which cert, and so on. Is there a straightforward HOW-TO for this somewhere?

Just to be clear, I don't want to know how to set up Squid to proxy HTTPS requests, or as a reverse proxy for a web server that uses HTTPS.

  • You don't specify the squid version, and the cache peer/parenting has changed a bit recently.

    Under squid 2.7 the client side should look a little like:

    cache_peer parent.fqdn parent SSL-PORT 0 ssl
    always_direct deny all
    never_direct allow all
    

    You may want client certificates if you want to authenticate both sides; however, that requires building a CA, and even a simple one is painful.

    On the server end there are more options; a minimal example is sketched below.
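
    One simple variant for the parent (a sketch, not from the original answer; the port number and cert/key paths are placeholders, and https_port requires Squid to be built with SSL support) is to listen on an SSL port with a server certificate:

    https_port 3129 cert=/etc/squid/parent-cert.pem key=/etc/squid/parent-key.pem

    If that certificate is self-signed, the child's cache_peer line may also need sslcafile= pointing at the signing certificate, or sslflags=DONT_VERIFY_PEER while testing.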

    Eddy : Building & managing a CA is dead simple with tinyca2: http://tinyca.sm-zone.net/
    LapTop006 : @Eddy I disagree, I've built several CA's, and it's not the actual CA maintenance work that's the issue, it's everything else around it.
    From LapTop006

OS specific network delay, why?

Since my new ISP installed their own router in my house (I rent a room in a student house, so I don't have much control over these things), I've started having strange delays. Any outgoing connection I make, be it HTTP or SSH, is delayed for several seconds, and once it is established I have no further problems. I can open several simultaneous tabs in my browser and, after about 5 to 8 seconds, they all connect and load simultaneously and quite fast. I can actually play online games once I connect. What's more interesting is that I experience this only with Linux distros, namely Arch and some versions of Ubuntu. Access from Windows installs is quite normal. What might be wrong with my router? Everything was fine with my old router, but I have to use this one now.

  • My first reaction is DNS. Check the DNS configuration of your Linux systems to make sure they're not pointing at a DNS server that is down.

    From pehrs
  • Two likely things: first, as pehrs says, DNS.

    Second, IPv6. If the ISP is academic-related, there's a chance they have IPv6 enabled, but broken.

    You could easily be seeing a combination of the two: many bad (consumer) routers are known to drop AAAA (IPv6) DNS queries on the floor, and Windows normally won't do IPv6 lookups unless it thinks it has a working IPv6 connection.

    A way to test this would be to try host ipv6.google.com (Linux) or nslookup ipv6.google.com (Windows). If these requests time out, you have your culprit. To work around it you could use third-party DNS servers (e.g. OpenDNS, Google) or your ISP's DNS servers directly. You should also request a firmware update from the ISP to fix the issue.
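
    For example (a quick sketch; ipv6.google.com is just a convenient host with an AAAA record and 8.8.8.8 is Google's public resolver), compare the router's resolver with an external one:

    host ipv6.google.com            # uses the resolver(s) from /etc/resolv.conf
    host ipv6.google.com 8.8.8.8    # same lookup, asked directly of Google's DNS

    If the first times out and the second answers, the router's DNS handling is the culprit.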

    Atilla Filiz : host ipv6.google.com resolves the host. No problem there. Manually setting the DNS solved the problem but I have no idea why it worked.
    From LapTop006

Sysadmin 101: How can I figure out why my server crashes and monitor performance?

I have a Drupal-powered site that seems to have neverending performance problems. It was butt-slow about 5 months ago. I brought in some guys who installed nginx for anonymous visitors, ajaxified a few queries so they wouldn't fire during page load, and helped me find a few bottlenecks in the code.

For about a month, the site was significantly faster, though not "fast" by any stretch of the word. Meanwhile, I'm now shelling out $400/month to Slicehost to host a site that gets fewer than 5,000 uniques a day. Yes, you read that right. Go Drupal.

Recently the site started crashing again and is slow again. I can't afford to hire people to come in, study my code from top to bottom, and make changes that may or may not help anymore. And I can't afford to throw more hardware at the problem.

So I need to figure out what the problem is myself. Questions:

  • When apache crashes, is it possible to find out what caused it to crash? There has to be a way, right? If so, how can I do this? Is there software I can use that will tell me which process caused my server to die? (e.g. "Apache crashed because someone visited page X." or "Apache crashed because you were importing too many RSS items from feed X.") There's got to be a way to learn this, right?

  • What's a good, noob-friendly way to monitor my current apache performance? My developer friends tell me to "just use Top, dude," but Top shows me a bunch of numbers without any context. I have no clue what qualifies as a bad number or a good number in Top, or which processes are relevant and which aren't. Are there any noob-friendly server monitoring tools out there? Ideally, I could have a page that would give me a color-coded indicator about how apache is performing and then show me a list of processes or pages that are sucking right now. This way, I could know when performance is bad and then what's causing it to be so bad.

  • Why does PHP memory matter? My site apparently has a 30MB memory footprint. Will it run faster if I bring that number down?

Thanks for any advice. I spent a year or so trying to boost my advertising income so I could hire a contractor to solve my performance woes. I didn't want to have to learn all this sysadmin voodoo. I'm now resigned to the fact that I might not have a choice.

  • Drupal can scale really well; talk to some webmasters in their community and you'll find people exceeding those numbers on a regular basis, so I can't say it's an inherent issue with Drupal. A couple of things come to mind, though: do you have caching enabled? Are you sure it's not your database (MySQL/Postgres, etc.)? What kind of hardware is your site running on? Are there any other sites on it? Please provide more details; there are too many unknown variables right now.

    From gravyface
  • You didn't really provide much technical info, but one of the easiest and most effective optimizations for Drupal (and other big PHP applications) is using APC, memcache or similar.

    APC alone is really easy to setup, and very effective. Here are my settings that seem to work well with Drupal (in the php.ini file):

    extension=apc.so
    apc.stat = 0
    apc.include_once_override = 1
    apc.shm_size = 90
    
    realpath_cache_size = 256K
    realpath_cache_ttl = 180
    

    The apc.shm_size setting is the most important (the maximum amount of server memory, in MB, used for the PHP file cache). Usually a lower size is enough, but if the cache is too small it becomes almost useless. For most Drupal installations "50" would be enough; however, if you have several active Drupal installations on the same server that are NOT multisites, you need to set it even higher.

    If you're using APC, you need to make sure Zend Optimizer is off; they don't work well together. APC alone can increase page load speed by 30-40%, but if the shm size is set too low, page load speeds don't improve.
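
    A quick way to confirm APC is actually loaded and sized as intended (a sketch; the CLI can read a different php.ini than mod_php, so a phpinfo() page is the surer check):

    php -m | grep -i apc           # should list "apc"
    php -i | grep apc.shm_size     # shows the effective cache size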

    Also, I wonder if the guys doing the initial optimization actually knew Drupal and did Drupal-specific optimizations or just general server work. You probably have these set already, but make sure you have the proper settings at admin/settings/performance, that is:

    Caching mode: normal
    Page compression: enabled
    Optimize CSS files: enabled
    Optimize JavaScript files: enabled
    

    All of these are very effective.

    You're also probably using Views, which can be optimized in many ways; each View can also have its own internal cache and a cache lifetime. If you're caching your pages anyway and the users are mostly anonymous, it won't have a big effect, though.

    There are many more ways to optimize (and you probably should still learn the administration side). Don't rely only on the Drupal log at admin/reports/dblog to show the errors you're looking for; for example, most fatal errors and "white screen" errors never make it there.

    • Apache crash info: you should try to locate the Apache and/or PHP logs for more information on why it crashed. For example, run locate error.log or locate php.log, then use the location to look at the last messages, e.g. sudo tail -n 100 /var/log/apache2/error.log (a sample path from my server). When you find the error, google it.

    • Monitoring Apache: "top" isn't very user-friendly, but it's fast and works on pretty much all UNIX machines. I usually use it to see if apache2 or mysql is choking.

    • Memory usage: if the "devel" module tells you a page load takes about 30M, that's pretty normal for a Drupal site with lots of modules. I have some installations using more (like 40M), but many also use less. My current project uses about 20M per normal page view. Deactivating unnecessary modules (or switching to more efficient ones) is one way to lessen the memory usage.

    While you're in php.ini, also make sure 'memory_limit' is not too low. Drupal does use a lot of memory; for example, all image scaling operations are very memory heavy, and the default limit is very low. In theory your install might work with 35M, but I would set it to at least double that to make sure all operations work. Some might disagree, but I usually keep it over 100M.
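
    To see what limit is actually in effect (again a sketch; the CLI may load a different php.ini than Apache, and the path below is just a typical Debian/Ubuntu location):

    php -r 'echo ini_get("memory_limit"), "\n";'
    grep memory_limit /etc/php5/apache2/php.ini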

    If you want to do real hardcore Drupal optimizations, there are many guides, but this site is probably the most thorough: http://2bits.com/articles/drupal-performance-tuning-and-optimization-for-large-web-sites.html

    And yeah, if you're paying that much a month for hosting you should be able to hire an expert for an hour or so :).

    From Ilmari

limit linux background flush (dirty pages)

Background flushing in Linux happens when either too much written data is pending (adjustable via /proc/sys/vm/dirty_background_ratio) or a timeout for pending writes is reached (/proc/sys/vm/dirty_expire_centisecs). Unless another limit is hit (/proc/sys/vm/dirty_ratio), more written data may be cached; beyond that limit, further writes will block.

In theory, this should create a background process writing out dirty pages without disturbing other processes. In practice, it does disturb any process doing uncached reading or synchronous writing. Badly. This is because the background flush actually writes at 100% device speed, and any other device requests at this time will be delayed (because all queues and write-caches along the way are filled).

Is there any way to limit the amount of requests per second the flushing process performs, or otherwise effectively prioritize other device I/O?

  • What is your average for Dirty in /proc/meminfo? It should not normally exceed your /proc/sys/vm/dirty_ratio. On a dedicated file server I have dirty_ratio set to a very high percentage of memory (90), as I will never exceed it. Your dirty_ratio is too low; when you hit it, everything craps out. Raise it.
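
    To see where you stand, something like this (a sketch; watch the values while your workload runs):

    grep -E 'Dirty|Writeback' /proc/meminfo
    cat /proc/sys/vm/dirty_ratio /proc/sys/vm/dirty_background_ratio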

    korkman : The problem is not processes being blocked when hitting dirty_ratio. I'm okay with that. But the "background" process writing out dirty data to the disks fills up queues without mercy and kills IOPS performance. It's called IO starvation, I think. In fact, setting dirty_bytes extremely low (like 1 MB) helps a lot, because flushing will occur almost immediately and queues will be kept empty. The drawback is possibly lower throughput for sequential writes, but that's okay.
    Luke : You turned off all elevators? What else did you tweak from a vanilla system?
    korkman : See my self-answer. The end of the story was to remove dirty caching and leave that part to the HW controller. Elevators are kinda irrelevant with a HW write-cache in place. The controller has its own elevator algorithms, so having any elevator in software only adds overhead.
    From Luke
  • After lots of benchmarking with sysbench, I've come to this conclusion:

    To survive (performance-wise) a situation where

    • an evil copy process floods dirty pages
    • and hardware write-cache is present (possibly also without that)
    • and synchronous reads or writes per second (IOPS) are critical

    just dump all elevators, queues and dirty page caches. The correct place for dirty pages is in the RAM of that hardware write-cache.

    Adjust dirty_ratio (or the newer dirty_bytes) as low as possible, but keep an eye on sequential throughput. In my particular case, 15 MB was the optimum (echo 15000000 > dirty_bytes).

    This is more a hack than a solution, because gigabytes of RAM are now used for read caching only instead of dirty cache. For dirty cache to work out well in this situation, the Linux kernel background flusher would need to average the speed at which the underlying device accepts requests and adjust background flushing accordingly. Not easy.
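
    For reference, a sketch of setting this at runtime and making it persistent (vm.dirty_bytes exists from kernel 2.6.29 on; 15000000 is the value that worked here):

    sysctl -w vm.dirty_bytes=15000000    # same effect as: echo 15000000 > /proc/sys/vm/dirty_bytes
    # persist it by adding "vm.dirty_bytes = 15000000" to /etc/sysctl.conf, then run: sysctl -p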


    Specs and benchmarks for comparison:

    Tested while dd'ing zeros to disk: sysbench showed a huge improvement, boosting 10-thread fsync writes at 16 kB from 33 to 700 IOPS (idle limit: 1500 IOPS) and a single thread from 8 to 400 IOPS.

    Without load, IOPS were unaffected (~1500) and throughput slightly reduced (from 251 MB/s to 216 MB/s).

    dd call:

    dd if=/dev/zero of=dumpfile bs=1024 count=20485672
    

    For sysbench, test_file.0 was prepared to be non-sparse with:

    dd if=/dev/zero of=test_file.0 bs=1024 count=10485672
    

    sysbench call for 10 threads:

    sysbench --test=fileio --file-num=1 --num-threads=10 --file-total-size=10G --file-fsync-all=on --file-test-mode=rndwr --max-time=30 --file-block-size=16384 --max-requests=0 run
    

    sysbench call for 1 thread:

    sysbench --test=fileio --file-num=1 --num-threads=1 --file-total-size=10G --file-fsync-all=on --file-test-mode=rndwr --max-time=30 --file-block-size=16384 --max-requests=0 run
    

    Smaller block sizes showed even more drastic numbers.

    --file-block-size=4096 with 1 GB dirty_bytes:

    sysbench 0.4.12:  multi-threaded system evaluation benchmark
    
    Running the test with following options:
    Number of threads: 1
    
    Extra file open flags: 0
    1 files, 10Gb each
    10Gb total file size
    Block size 4Kb
    Number of random requests for random IO: 0
    Read/Write ratio for combined random IO test: 1.50
    Calling fsync() after each write operation.
    Using synchronous I/O mode
    Doing random write test
    Threads started!
    Time limit exceeded, exiting...
    Done.
    
    Operations performed:  0 Read, 30 Write, 30 Other = 60 Total
    Read 0b  Written 120Kb  Total transferred 120Kb  (3.939Kb/sec)
          0.98 Requests/sec executed
    
    Test execution summary:
          total time:                          30.4642s
          total number of events:              30
          total time taken by event execution: 30.4639
          per-request statistics:
               min:                                 94.36ms
               avg:                               1015.46ms
               max:                               1591.95ms
               approx.  95 percentile:            1591.30ms
    
    Threads fairness:
          events (avg/stddev):           30.0000/0.00
          execution time (avg/stddev):   30.4639/0.00
    

    --file-block-size=4096 with 15 MB dirty_bytes:

    sysbench 0.4.12:  multi-threaded system evaluation benchmark
    
    Running the test with following options:
    Number of threads: 1
    
    Extra file open flags: 0
    1 files, 10Gb each
    10Gb total file size
    Block size 4Kb
    Number of random requests for random IO: 0
    Read/Write ratio for combined random IO test: 1.50
    Calling fsync() after each write operation.
    Using synchronous I/O mode
    Doing random write test
    Threads started!
    Time limit exceeded, exiting...
    Done.
    
    Operations performed:  0 Read, 13524 Write, 13524 Other = 27048 Total
    Read 0b  Written 52.828Mb  Total transferred 52.828Mb  (1.7608Mb/sec)
        450.75 Requests/sec executed
    
    Test execution summary:
          total time:                          30.0032s
          total number of events:              13524
          total time taken by event execution: 29.9921
          per-request statistics:
               min:                                  0.10ms
               avg:                                  2.22ms
               max:                                145.75ms
               approx.  95 percentile:              12.35ms
    
    Threads fairness:
          events (avg/stddev):           13524.0000/0.00
          execution time (avg/stddev):   29.9921/0.00
    

    --file-block-size=4096 with 15 MB dirty_bytes on idle system:

    sysbench 0.4.12: multi-threaded system evaluation benchmark

    Running the test with following options:
    Number of threads: 1
    
    Extra file open flags: 0
    1 files, 10Gb each
    10Gb total file size
    Block size 4Kb
    Number of random requests for random IO: 0
    Read/Write ratio for combined random IO test: 1.50
    Calling fsync() after each write operation.
    Using synchronous I/O mode
    Doing random write test
    Threads started!
    Time limit exceeded, exiting...
    Done.
    
    Operations performed:  0 Read, 43801 Write, 43801 Other = 87602 Total
    Read 0b  Written 171.1Mb  Total transferred 171.1Mb  (5.7032Mb/sec)
     1460.02 Requests/sec executed
    
    Test execution summary:
          total time:                          30.0004s
          total number of events:              43801
          total time taken by event execution: 29.9662
          per-request statistics:
               min:                                  0.10ms
               avg:                                  0.68ms
               max:                                275.50ms
               approx.  95 percentile:               3.28ms
    
    Threads fairness:
          events (avg/stddev):           43801.0000/0.00
          execution time (avg/stddev):   29.9662/0.00
    

    Test-System:

    • Adaptec 5405Z (that's 512 MB write-cache with protection)
    • Intel Xeon L5520
    • 6 GiB RAM @ 1066 MHz
    • Motherboard Supermicro X8DTN (5520 Chipset)
    • 12 seagate barracuda 1 TB disks
      • 10 in linux software raid 10
    • kernel 2.6.32
    • filesystem xfs
    • debian unstable

    In sum, I am now sure this configuration will perform well in idle, high load and even full load situations for database traffic that otherwise would have been starved by sequential traffic. Sequential throughput is higher than two gigabit links can deliver anyway, so no problem reducing it a bit.

    Thanks for reading and comments!

    From korkman

Apache: Limit the Number of Requests/Traffic per IP?

I would like to allow one IP to use up to, say, 1GB of traffic per day, and if that limit is exceeded, all requests from that IP are then dropped until the next day. However, a simpler solution where the connection is dropped after a certain number of requests would suffice.

Is there already some sort of module that can do this? Or perhaps I can achieve this through something like iptables?

Thanks

  • If you want a pure Apache solution, there's bw_mod for Apache 2.0 and mod_bandwidth for Apache 1.3. They can throttle the bandwidth of your server to limit bandwidth usage.

    There is also mod_limitipconn, which prevents one user from making lots of connections to your server. mod_cband is another option, but I have never used it.

    If you don't want to mess with your Apache installation you can put a squid proxy in front of Apache. It also gives you more control over the throttling.

    However, when you want to limit bandwidth per IP, the problem is in most cases a few large objects, and you want to give a sane error message when a user pulls too much data and gets blocked. In that case it might be easier to write a PHP script and store the access information in a temporary table in a database.
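
    If you do go the iptables route mentioned in the question, a minimal sketch (not a per-day byte quota, just a cap on concurrent connections per source IP; 20 is an arbitrary number) would be:

    iptables -A INPUT -p tcp --syn --dport 80 -m connlimit --connlimit-above 20 -j REJECT --reject-with tcp-reset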

    pehrs : Have you set your robots.txt to disallow spiders?
    packs : The problem with robots.txt is that (much like RFC 3514) only nice robots respect it.
    pehrs : True, but you will find that the majority of the people spidering your site use standard tools. And many of them, like wget, respect robots.txt. Robots.txt is also the correct way to inform your users that you don't want them to spider.
    pehrs : Squid is probably the next step. That or simply banning them. If they bypass robots.txt I don't see any reason to service them.
    From pehrs

Hyper-V: virtual 32 bit OS on 64bit OS

Hi. Is it possible to run a 32-bit virtual OS (Windows Server 2003) on a 64-bit machine with Windows Server 2008 R2 x64 Standard installed?

How to add authors page, authors profile and pics on Wordpress?

Hello,

I am looking for a plugin that would add the following to our blog site, which is built on self-hosted WordPress:

  1. The author's name appearing on every blog post (below the title, where currently the date and tags appear)
  2. An authors page added to our blog site
  3. On the authors page, each author's profile displayed, including bio, website, social network profile buttons and published posts.

So it would be more like how it is set-up at Mashable: http://mashable.com/author/pete-cashmore/.

Is there a plugin or plugins I could use to achieve these?

Thanks for the help.

  • Use the author template tags in your theme so that each post gets a byline like "This post was written by ...", and use custom permalinks (e.g. /%postname%/) so you get an author page at domain.com/author/name.
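
    (The template code in the original answer appears to have been stripped in this archive. Purely as an illustrative sketch, a byline in a theme's single.php could use the standard author template tag:)

    <p>This post was written by <?php the_author_posts_link(); ?></p>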

    April : Thanks, but your suggestion doesn't guide me through where to add/modify this. I was hoping there was a plugin that could do all of these.
    From Athul

Centos: multiple IP addresses

I just bought 10 extra IPs from my host. How do I point these at my CentOS 5 server?

  • Use IP aliasing on the host.

    ifconfig eth0:1 192.168.0.100 netmask 255.255.255.0
    ifconfig eth0:2 192.168.0.101 netmask 255.255.255.0
    

    And so on. To make it permanent, you will have to use the files in /etc/sysconfig/network-scripts/ifcfg-<interface>.

    But why do you need multiple IPs for a single host? SSL/TLS?

    pehrs : Check the answer from PowerSp00n for how to setup the ifcfg files.
    From pehrs
  • On CentOS you can use an ifcfg-eth?-range? file to assign multiple IP addresses. For example, say you want to assign additional IP addresses to your eth0 interface and you don't have any additional addresses assigned yet. Create the file /etc/sysconfig/network-scripts/ifcfg-eth0-range0:

    IPADDR_START=10.0.0.10
    IPADDR_END=10.0.0.19
    CLONENUM_START=0
    

    If you have already assigned additional IP addresses, the CLONENUM_START value should match the next available eth0:X number.

    If the IP addresses aren't in order you have to create an ifcfg-eth0:x file for each of the addresses. The content should look like this:

    DEVICE=eth0:0
    IPADDR=10.0.0.10
    NETMASK=255.255.255.0
    ONBOOT=yes
    

    Change the DEVICE value to the corresponding filename and run ifup eth0:0 to bring the interface online.

    From PowerSp00n

Problem with host-only in Virtual Box

Hi folks. I have VirtualBox on my computer, with Windows 7 as the host. The virtual machine runs Windows Server 2008 and SQL Server 2008, and I have the SQL client tools on Windows 7 (the host computer), but the VM is using host-only networking. I need a host-to-guest connection, but I cannot see the virtual computer (Win2008) from the host (Win7). Is there any solution?

  • VirtualBox lets you choose several modes for the VM's virtual NIC. The mode that I seem to have the most success with is "bridged networking". Try changing this setting on your VM's NIC and see if it works. If it still doesn't work, try deleting the virtual NIC and then re-creating it.

    If that doesn't fix it, try creating a new VM, and making the current virtual hard disk the drive for the new VM. Sometimes the VM configuration file becomes corrupted in some way (not the hard disk, just the configuration file), and creating a new VM fixes the problem.

SQL server environment

Hello. I'm considering some changes to our current sales environment and trying to weigh all the pros and cons. Current situation: a SQL Server machine (quite a decent HP server, server1) plus a backup server (a smaller Dell server, server2). All SQL files and SQL Server itself are on server1. If something goes wrong with server1, I will have to move to server2 manually. Connecting to the SQL Server: 1 HQ (where the server is located) + 4 sites through VPN.

Now I'm considering 2 scenarios:

  1. Buy some storage system + update existing servers (add ram, upgrade processors) and go for VMWare ESXI.
  2. Rent a server at a datacenter + rent a virtual server in case the real server goes down. Also rent some space at a storage provider to keep the SQL files there.

Has anyone considered these things and maybe found a good pros/cons list? ;)

Thanks

  • Well, it's always up to you - these variants will just have different prices and performance, and I can't say which one is better based on the information provided.

    For #2 you might keep the virtual server "disabled" (a cloud-style option) so that you are not paying for it when you don't need it. You may combine this option with your first variant.

  • What are your requirements?

    • Are you looking for automatic failover to the backup server or is the manual process okay?
    • How much storage is required?
    • What is the concurrent usage?
    • If HQ could not connect to the SQL Server because the remote link is down, how does that affect your productivity?

    How old are your current servers and what is their warranty status? Are they within consideration to migrate to ESX servers?

    • Manual is OK if automatic failover would be costly
    • The current database is about 17GB (both data file and transaction log)
    • In the current situation, if HQ has connection problems, the other sites are also down. One of the pros of using a server in a datacenter is that providers usually react more quickly to problems in the datacenter than to problems at an individual customer's site :)

    Main server is almost 3 years old with standard HP warranty.

I want my logs sent to my mail with logrotate

Not strictly a question about programming as such, more of a log handling question.

Anyway. My company has multiple clients, and each of these clients has a set of logs that I'd very much like to have sent to me by e-mail.

Now, another prerequisite is that they're highlighted with simple HTML.

All that is well and good; I've managed to make a highlighter for the given log types. So what I do is use logrotate's prerotate hook to send the logs as an e-mail message. Example:

/var/log/a.log /var/log/b.log {
  daily
  missingok
  copytruncate

  prerotate
    /usr/bin/python /home/foo/hilight_logs /var/log/{a,b}.log | /usr/sbin/sendmail -FLog\ mailer -fnoreply@client.com me@mydomain.com
  endscript
}

The problem with this approach is basically that logrotate sucks: it'll run the command for every log file specified in the specifier, and to my knowledge there's no way to know which of the log files is being handled. (Which wouldn't really help anyway.)

Short of repeating the exact same logrotate block up to 10 times on different machines, the only thing I can do is get bogged down with log spam every night. I grew tired of it today, so I'm asking.

  • Hi,

    Try replacing prerotate with firstaction. That way your mail command will only be executed once for all the logs defined in the block; see the sketch after the man page excerpt below.

    This is the text from the man page:

    firstaction/endscript
              The lines between firstaction and endscript (both of which must appear on lines by themselves) are executed once before all log files that match the
              wildcarded pattern are rotated, before prerotate script is run and only if at least one log will actually be  rotated.  These  directives  may  only
              appear inside a log file definition. If the script exits with error, no further processing is done. See also lastaction.
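
    Applied to the block from the question, that would look roughly like this (an untested sketch; the command itself is unchanged from the original):

    /var/log/a.log /var/log/b.log {
      daily
      missingok
      copytruncate

      firstaction
        /usr/bin/python /home/foo/hilight_logs /var/log/{a,b}.log | /usr/sbin/sendmail -FLog\ mailer -fnoreply@client.com me@mydomain.com
      endscript
    }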
    
    lericson : This I will definitely try. Thanks a million!
    From Christian

varnish daemon error: libvarnish.so.1 not found

In order to try out varnish for an upcoming project I installed it on an Ubuntu server using this tutorial: http://varnish-cache.org/wiki/InstallationOnUbuntuDapper

The build process worked without any errors, but I can't start the varnish daemon. I always get the error message:

varnishd: error while loading shared libraries: libvarnish.so.1: cannot open shared object file: No such file or directory

But /usr/local/lib/libvarnish.so.1 clearly exists.

How can I tell varnish to look in that directory and load the library?

UPDATE

To answer cd34's questions:

    ldd `which varnishd`

outputs:

    linux-vdso.so.2 =>  (0x00007fff0a360000)
    libvarnish.so.1 => not found
    libvarnishcompat.so.1 => not found
    libvcl.so.1 => /usr/local/lib/libvcl.so.1 (0x00007f2a6fcaf000)
    libdl.so.2 => /lib/libdl.so.2 (0x00007f2a6faab000)
    libpthread.so.0 => /lib/libpthread.so.0 (0x00007f2a6f88f000)
    libnsl.so.1 => /lib/libnsl.so.1 (0x00007f2a6f675000)
    libm.so.6 => /lib/libm.so.6 (0x00007f2a6f3f1000)
    libc.so.6 => /lib/libc.so.6 (0x00007f2a6f082000)
    /lib64/ld-linux-x86-64.so.2 (0x00007f2a6fec7000)

Do you have varnish in two places on the machine, one from a previous attempt?
-> No, varnish is only installed once

Are you using 2.1.0 which was recently released?
Yes, I am using the most recent version

  • ldd `which varnishd`
    

    Where is varnish looking for the libraries? Do you have varnish in two places on the machine, one from a previous attempt? Did you specify any directory paths when you did ./configure ?

    Are you using 2.1.0 which was recently released?

    Max : Good questions and thanks for your answer, please see my edit.
  • Try running sudo ldconfig to rebuild the library cache.
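
    If ldconfig alone doesn't do it, check whether /usr/local/lib is actually on the loader's search path (a sketch; the exact conf layout varies between Ubuntu releases):

    ldconfig -p | grep libvarnish                                 # is the library in the cache?
    grep -rs /usr/local/lib /etc/ld.so.conf /etc/ld.so.conf.d/   # is the path searched at all?
    echo /usr/local/lib | sudo tee -a /etc/ld.so.conf             # if not, add it...
    sudo ldconfig                                                 # ...and rebuild the cache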

    Max : So easy! That did it, thank you very much!
    From rossnz

Snow Leopard Compatible Drivers for Moschip MCS7720 USB-to-Serial Controller

We are using Cables Unlimited USB-2925 USB-to-Dual-DB9 serial cables, which use the Moschip MCS7720 controller. We have downloaded the newest driver from http://www.moschip.com/mcs7720.php, but that driver was last updated in 2005. It does not seem to be working with Macs running OS X 10.6 Snow Leopard.

We have sent emails to the support addresses for both Cables Unlimited and Moschip. Cables Unlimited says they are checking with Moschip. No responses from Moschip yet.

Does anyone know of any updates for this driver, or are there any ways to get the driver to work with Snow Leopard?

  • Unfortunately, I don't have a suggestion for drivers for the controller you're currently using. I've always found the Keyspan High Speed USB Serial Adapter (USA-19HS) to be an excellent product for my USB-to-Serial needs and it is well supported under Mac OS X (including Snow Leopard, although I haven't tested this yet).

    Kristopher Johnson : Prolific-based USB-to-serial adapters work under Snow Leopard, using the open-source driver from http://sourceforge.net/projects/osx-pl2303/
    From morgant
  • There is a solution to force this exact cable to work with Snow Leopard while waiting for Moschip to update their drivers.

    Be very careful while you are making this change. You will need to execute these commands as the root user.

    1. Install the latest driver from Moschip and reboot.

    2. Locate the file /System/Library/Extensions/MCS7720Driver.kext/Contents/Info.plist

    3. Open the file and locate the line that reads <integer>30496</integer>

    4. Change the number from 30496 to 30485. Save the file.

    5. Execute the command "touch /System/Library/Extensions".

    6. Wait a few minutes and plug in your USB cable. You should see a screen pop up that alerts you that two new network interfaces have become available. You'll note that they're named /dev/tty.USB-Serial0.0 and /dev/tty.USB-Serial1.1.

    7. You're good to go. If the software is ever updated by Moschip, it will likely overwrite this change.

  • There are updated drivers dated 22 Jan 2010. I've just installed them, but haven't yet rebooted and tested them. They require registration to download from MosChip's site:

    http://www.moschip.com/mcs7720_downloads.php

Failed pinging a LAN card of the server from the client using shared internet connection

The server (Windows XP Pro SP3) has two LAN cards (LAN card A and B) and is connected to the internet using ADSL. The ADSL connection is shared to LAN card B using Internet Connection Sharing.

The client (Windows XP Pro SP3) has one LAN card, and is connected to LAN card B of the server so that it has access to the internet.

The IP address on the LAN cards are defined as follows:

Server:
 LAN card A: 192.168.0.3/24 (manually defined by me)
 LAN card B: 192.168.0.1/24 (manually defined by Internet Connection Sharing)

Client:
 LAN card: 192.168.0.123/24 (assigned by DHCP) Default gateway: 192.168.0.1

From the server, I can ping 192.168.0.123 successfully.

From the client, it can access the internet without any problem. I can also ping 192.168.0.1 successfully, but pinging 192.168.0.3 fails with a Request Timed Out error message.

Why did the ping fail, and what should be done to make the ping possible? (all firewalls have been turned off.)

  • To start with, you should not have two identical subnets on multiple disjoint networks. The reason is that Windows sees that both cards have access to the entire 192.168.0.x network, when in fact they don't.

    What will be happening is that when you ping 192.168.0.3 from the client machine, the server will send the ping response back out the LAN card A interface rather than the LAN card B interface (because it doesn't know any better).

    You will need to ensure that one of your networks is different. You should have, say 192.168.0.x and 192.168.1.x (both with a subnet mask of 255.255.255.0) for this to be a correct network setup.

    bobo : Yes, you are right. Following your solution, the ping works now!
    From Farseeker

Apache log with Munin question

In Munin Graph:

What is the meaning of 'apache accesses' and 'apache processes'? And what's the relation between them?

  • Processes: the prefork MPM uses multiple child processes; each child handles one connection at a time.

    Accesses: the total number of accesses (requests served).

    From rkthkr

Bridged virtual interface is not available or visible to ifconfig.

Hello all.

I'm running Ubuntu 9.04, kernel 2.6.28-18, and vmware-server 2.0.1.

I'm attempting to set up a virtual Linux machine to use a bridged interface rather than NAT or host-only. Both NAT and host-only work just fine. When running vmware-config.pl, I set /dev/vmnet0 to bridge eth0, /dev/vmnet1 to host-only, and /dev/vmnet8 to NAT.

When I run ifconfig -a I see the physical interface (eth0), plus vmnet1 and vmnet8, both of which are up and have IP addresses assigned to them. I also see various other interfaces that are not relevant here.

In the web console, when I ask that the guest machine's network card be bridged, it states that a bridged setup is "Not available" and shows the disabled device icon. Inside the guest machine, I do have an eth0 interface which I can set to anything I like, however it can't see my external network, or the host.

I do see errors in my vmware/hostd.log which state: "The network bridge on device vmnet0 is not running. The virtual machine will not be able to communicate with the host or with other machines on your network" which confirms the problem.
vmnet-bridge is running, and I see the following in my process table:

/usr/bin/vmnet-bridge -d /var/run/vmnet-bridge-0.pid -n 0 -i eth0

I confirm that the /var/run/vmnet-bridge-0.pid file is there and that it points to the correct process.

I saw this question relating to Ubuntu 9.04 and bridged interfaces, in which the poster determined that the vsock library was not getting built due to a flaw in the vmware-config.pl script. I applied the patch, reran the script, and confirm that vsock.ko and vsock.o are in my /lib directory structure. vsock does show up in an lsmod.

My /etc/vmware directory has /vmnet1 and /vmnet8 subdirectories. They contain configuration utilities for running DHCP and nat type services as expected. There is no vmnet0 subdirectory. My /etc/vmware/netmap.conf file DOES show entries for vmnet0; both the name and the device as I configured it from the script.

My /dev directory contains devices vmnet0 through vmnet9. They have major device number 119, and minor device numbers 0 through 9. /proc/net/dev shows statistics for vmnet1 and vmnet8, but not vmnet0. I have a /proc/vmnet directory, but it's empty.

When I start or stop the vmware service with /etc/init.d/vmware start, I see the following:

Starting VMware services:
   Virtual machine monitor                                             done
   Virtual machine communication interface                             done
   VM communication interface socket family:                           done
   Virtual ethernet                                                    done
   Bridged networking on /dev/vmnet0                                   done
   Host-only networking on /dev/vmnet1 (background)                    done
   DHCP server on /dev/vmnet1                                          done
   Host-only networking on /dev/vmnet8 (background)                    done
   DHCP server on /dev/vmnet8                                          done
   NAT service on /dev/vmnet8                                          done
   VMware Server Authentication Daemon (background)                    done
   Shared Memory Available                                             done
Starting VMware management services:
   VMware Server Host Agent (background)                               done
   VMware Virtual Infrastructure Web Access
Starting VMware autostart virtual machines:
   Virtual machines                                                    done

Nothing appears to be wrong there.

What n00b thing am I doing such that vmnet0 and only vmnet0 does not show up in the interface list?

  • D'oh! Right after I posted this, I found the answer. Apparently vmnet-netifup wasn't running for vmnet0. Once I ran:

    `/usr/bin/vmnet-netifup -d /var/run/vmnet-netifup-vmnet0.pid /dev/vmnet0 vmnet0`
    

    it worked fine. Now why didn't it automatically start when the other two did? That's an open question still.

    From Omniwombat
  • Thanks for your post; at least I can start vmnet0 manually now. Still no idea why it doesn't come up automatically...

    From bluebeard

VMware Workstation on Linux: Dropping core files in a shared folder...

I'm using VMware 6.0.2 on a RHEL 4.6 host. The VMs are MontaVista CGE 5.0 (2.6.21 kernel). I'm trying to get applications running in the VMs to drop any core files on a HGFS volume, i.e. in a "shared folder". The core files get created as per the path and format given in /proc/sys/kernel/core_pattern, but they are always zero length. If I change the path to a local path (on a virtual disk in the VM), all is well.

Any ideas what I have to do get the core files written into a shared folder?

Thanks for your help!

  • I've confirmed the issue over here. I don't know why Linux refuses to dump core contents to an HGFS share (Arch Linux kernel 2.6.32 with open-vm-tools 2010.01.19 here), but I do have a solution.

    Linux 2.6.19 and higher will let you pipe core dumps through an arbitrary program, so create a shell script that copies its stdin to a file on your HGFS share, e.g.:

    #!/bin/sh
    
    # Where do you want the core to go?
    COREFILE=/mnt/hgfs/vmshare/core
    
    tee $COREFILE >/dev/null
    

    Of course you may wish to implement some logic for $COREFILE so that each subsequent core dump doesn't just overwrite the last.
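
    For example (a trivial variant; the filename format is arbitrary), you could key the name on a timestamp and the helper's PID so dumps don't clobber each other:

    #!/bin/sh
    # write each core to a unique file on the HGFS share
    COREFILE=/mnt/hgfs/vmshare/core.$(date +%Y%m%d-%H%M%S).$$
    tee "$COREFILE" >/dev/null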

    Save your script as /usr/local/bin/core.sh, then set the file's executable bit and configure core_pattern as follows:

    # chmod +x /usr/local/bin/core.sh
    # sysctl -w kernel.core_pattern='|/usr/local/bin/core.sh'
    

    Linux will pipe any core dumps through your shell script, which won't have any problem writing to the HGFS share itself.

    If you're wondering, you can't simply put the tee command directly in kernel.core_pattern, because in kernels older than 2.6.24 you can't specify arguments to a pipe command with this sysctl. For the same reason, unfortunately I can't think of a good way for you to incorporate the core_pattern template specifiers into your core dump file names using this method, if you're tied to kernel 2.6.21.

    : +1 This is at least a step in the right direction! But, as you suspected, I have to use a core_pattern template w/ specifiers. Let me see if I can work with this approach. Thanks for your answer!
    From Niten

networking router source of packet loss and transmission delays

Why are routers a source of packet loss and transmission delays?

  • Consider two hosts sending data as fast as they can to a third host, all with 100 Mbps connections. That means the two hosts are trying to fit 200 Mbps onto the third host's 100 Mbps link.

    What is the switch/router to do when its internal buffers are full? It has no option but to drop/reject further incoming packets, or to drop packets based on some prioritization criteria.

    Regarding delays: the more packets queued up for sending to a host, the longer the delays. Packets have to be sent serially; if two hosts try to communicate with the same third host, their packets can't be sent out in parallel, so one packet has to wait until the first one is done transmitting.

    There's some internal delay as well, as the packet has to be read, a route may need to be looked up to figure out which port to send the packet to, packet filtering may be applied, and so on. Exactly what is done and how is very dependent on the internals of the router/switch, of course.

Exchange 2003 -- Mailbox Management not deleting ALL messages aged 30 days or older...

I've recently created a Mailbox Management task within Exchange 2003 that, every night, looks at the contents of the Deleted Items within a particular mailbox and deletes mail that's 30 days or older.

The scheduled task ran on its own last night and I have confirmed that messages within the right mailbox and the right folder were, in fact, processed. Many mails were deleted ... but not every email older than 30 days. In fact, the choice seems kinda random.

Last night 3/10/2010 was the 30 day watermark. Mails were deleted from 3/10/2010, sure enough, but not all of them. Mails older than 3/10/2010 were deleted as well, but, again, not all of them.

The only criteria I have on the management -- aside from the single mailbox and single folder scopes -- is the age criteria. The size criteria is set to Any, meaning I don't care about the size. I care about the age.

It's made me wonder whether there is some sort of limit on how many mails can be processed.

The schedule is set to run from 12am to 1am every night.

Any hints appreciated.

EDIT: Here are pics.

[screenshots of the Mailbox Management settings were attached here]

And here's an example of one of the reports:

    The Microsoft Exchange Server Mailbox Manager has completed processing mailboxes
Started at: 2010-04-10 22:52:01
Completed at:   2010-04-10 22:52:10
Mailboxes processed:    1
Messages moved: 0
Size of moved messages: 0.00 KB
Deleted messages:   114
Size of deleted messages:   4.41 MB

That report up there is from a MANUAL run of the Mailbox Management Process. If I run it again I get another report stating that nothing was deleted.

    1. Do you have it set to only delete mail that's been backed up? If so, is your backup job doing a full backup on a regular basis?

    2. How long do you let it run, and how large is/are your database(s)? If you only let it run for 30 min and you have a 50GB database, that's not going to cut it. I've got around 25GB and it takes about 4 hours to do all the maintenance (not exactly the newest server either).

    tcv : I am not sure where I would set it to delete mail that's already been backed up. That doesn't seem like an option in the dialogues I've seen. As to No. 2: the schedule is set to start at 12am and stop at 1am. I have the task set to send a report to me. In that time, I got 4 reports, all processing that one mailbox, and only the FIRST report showed any actual deletes. So I am a little lost on how to answer your question.
    tcv : Oh, my database is 30GB. But it seemed like the Management process ran for so many minutes, stopped, sent me a report, then ran again, stopped, sent me a report, and repeated about four times until 1am. I am scoping this down to 1 mailbox.
    From Chris S
  • I found the answer here: http://msexchangeteam.com/archive/2004/08/17/215807.aspx

    This led me to this MSKB Article: http://support.microsoft.com/?id=326397

    "3" is the behavior I want. I set it and KABLAMMO all the mails I wanted gone are gone.

    From tcv

Linux: Managing users, groups and applications

I am fairly new to Linux administration, so this may sound like quite a noob question.

I have a VPS account with a root access.

I need to install Tomcat and Java on it and later other open source applications as well.
Installation for all of these is as simple as unzipping the .gz in a folder.

My questions are

A) Where should I keep all these programs?
In Windows, I typically have a folder called programs under c:\ where I unzip all applications.
I plan to have something similar here as well.
Currently, I have all these under an apps folder under /root, which I am guessing is a bad idea. http://serverfault.com/questions/57962/whats-wrong-with-always-being-root
Right now I am planning to put them under /opt

B) To what group should Tom belong?
I would need a user, say Tom, who can simply execute these programs.
Do I need to create a new group, or just add Tom to some existing group?

C) Finally, am I doing something really stupid by installing all these applications by simply unzipping them?
An alternative would be to use yum or RPM or something like that to install these applications.
Given my familiarity (and tight budget), that seems like too much to me.
I feel uncomfortable running commands which I don't understand too well.

  • A) Read the Filesystem Hierarchy Standard.

    B) Tom should not be running these programs. They should be run by root, in the background.

    C) Yes. Packages for a distro are tuned to work efficiently within the distro and with other packages in the distro.

    RUTE

    RHEL documentation

    Money? CentOS.

    RN : I thought it was a bad idea to start programs like Tomcat as root: http://serverfault.com/questions/57962/whats-wrong-with-always-being-root
    Ignacio Vazquez-Abrams : Most programs either drop privileges after starting, or are actually run via wrappers that force it to be run as another user. But in either case they are started *by* root, even though they don't run *as* root.
  • Learn to use your package manager. Package managers are good; they will do things right more often than not (Windows doesn't have a sane one). By using your package manager it can tell you when security updates become available, and it allows for easy removal. Other people who are familiar with this distribution will be familiar with the locations things have been installed to, and you will be better able to use your distro's online documentation and community; these will be less useful if you do everything yourself. Only do it the manual way if your distro doesn't provide what you need (and even then I'd recommend learning to package it yourself and still using the package manager).

Locking User account created under Windows Authentication in SQL Server

Hi,

As per my project requirements, I need to lock a user in SQL Server (one that was created using Windows Authentication). Is there any way to do this?

Thanks for the help

Santhosh

  • I don't think this will be directly possible in SQL Server, but you could:

    • In SQL Server: Remove all rights from the user (including the ability to connect).
    • Disable the account in Windows.

    As the account is a Windows account, it is up to Windows to lock it.

    K. Brian Kelley : -1 because it is possible to do in SQL Server.
    Richard : @K.Brian: In what way is the first bullet point different to your own answer?
    From Richard
  • How to do this depends on the version. I am assuming the Windows user is added explicitly and not through a Windows group.

    In SQL Server 2000, if you are using Enterprise Manager, bring up the properties for the Windows user login. On the General tab you can select Deny Access under Authentication and this will prevent the Windows user from connecting to the SQL Server.

    In SQL Server 2005/2008, there are two ways to do this. Using SQL Server Management Studio, again bring up the properties of the Windows user login. Click on the Status page. You can either deny permission to connect to the database engine, or disable the login, or both.
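
    If you'd rather script it than click through Management Studio, the T-SQL equivalents on 2005/2008 would be along these lines (DOMAIN\SomeUser is a placeholder for the actual Windows login):

    -- stop the login from connecting to the engine at all
    DENY CONNECT SQL TO [DOMAIN\SomeUser];

    -- or disable the login entirely
    ALTER LOGIN [DOMAIN\SomeUser] DISABLE;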