Wednesday, January 19, 2011

Really removing CVS files from a module

Someone in my development team committed a huge intermediate file into our CVS repository (run in a client-server configuration where I have no server-side access).

I have "cvs removed" the file, but the existence of this file seriously slows down working with that module.

The file was never needed. If I request that FILENAME.EXT,v (i.e. the comma-V file on the server) is deleted, would that be enough to wipe all trace of the file from the module?

I'm looking for a simple solution, ideally one that doesn't involve running the CVS tool, as our IT department is unable to use it (and complex requests frequently get ignored).

  • If you request that file.txt,v is deleted from the filesystem, you should be OK - clients operating in that module may get strange errors if they haven't run 'cvs update' since the file was cvs rm'd.

    Either have your users remove the module and check it out again, or get them to edit CVS/Entries and remove any mention of the file in question (see the sketch below).

    Ray Hayes : Thought so, thanks.
    From Alex Holst
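
For reference, a minimal sketch of the client-side cleanup described in the answer, assuming CVSROOT is already set in the environment, the module is checked out in a directory called mymodule (a hypothetical name), and the offending file was FILENAME.EXT at the top of the module:

    # Option 1: throw away the working copy and check it out again
    rm -rf mymodule && cvs checkout mymodule

    # Option 2: drop the file's line from CVS/Entries in the directory that contained it
    cd mymodule
    grep -v '^/FILENAME\.EXT/' CVS/Entries > CVS/Entries.tmp && mv CVS/Entries.tmp CVS/Entries
    rm -f FILENAME.EXT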

OVF Tools error

Hello all,

I am trying to use OVF Tools 1.0 to create an .ovf appliance split into 4 GB chunks. When I run the following command:

ovftool --chunkSize=4gb vi://:@/?ds= e:\test\test.ovf

everything starts off fine. It will go for about 15-16%, then I get this message:

"Error: unable to get NFC ticket for target disk".

I have looked online, but cannot find anything that matches this problem. I am using Windows 2k3 as the OVF creation server, and I'm in an ESX 3.5 environment. The OVF must be split into 4 GB chunks (or less) so they can be put onto DVDs. Also, when it is done it deletes the files it started to create. Any help would be greatly appreciated.

Vmware Server 2.02 on Ubuntu 9.04 (64-bit) - Performance problems with guest OS/servers!

Hello all,

Hope someone here can help, as I'm pulling out my hair trying to find a way to get some decent performance from my VM box. I'm running Ubuntu 9.04 64-bit on a four-core AMD Phenom with 4 GB of RAM. I also have another system (a notebook) running Ubuntu 9.04 32-bit with 2 GB of RAM. I can run an image on the slower/older dual-core notebook lightning fast, yet take that same image to my VM server and it runs noticeably slower. To make matters worse, when I run the two server images that I need to run with some decent level of performance, they come up very slowly and can almost bring the host OS to a halt. If I look at the Ubuntu performance monitor it doesn't show a huge load on the CPUs or more than 55-65% RAM usage, but the machine still runs like it's about to die. So... here are my questions:

  1. Are there any known issues with the setup I have that would cause such bad performance?
  2. Should I be running something other than VMware 2.02?
  3. Should I be running some other host OS?
  4. Is there any way to change/modify settings some place to fix this?

Thanks in advance.

  • I recently installed 2.01 on CentOS 5.3 and wasn't very happy with its stability and management capability. I upgraded the OS to 5.4 and it got worse. Upgrading to 2.02 didn't help. It got to the point that guests would randomly crash and most of the time I couldn't start them. I went back to 1.10 (my standard is 1.09) and have had no trouble at all on the exact same hardware and OS.

  • VMware Server 2 dumped its ability to be managed via the VMware Server Console application in favour of a Tomcat servlet, causing its disk space, CPU, and memory usage to balloon compared to version 1. Whether it's a plot to encourage users to shell out for VMware ESX or just poor judgement, I'd recommend reverting to VMware Server 1.10.

    From Eric3

Access mails on server after moving mail DNS

I've just moved mail for a domain over to Google Apps from my web host. All is well, except for some mail in one account that was left on my host. I don't want to reverse all my DNS changes just to retrieve these mails if I can get away without it.

My interface with my host is through Plesk, on a Virtuozzo instance. I'm asking here because I'm unable to contact my hosting provider.

  • To switch all your traffic over to Google Apps, the only thing you need to do is update your MX records to point to their servers.

    I assume that this is what you did? Or did you update the A record that your existing MX records point at, so that it now resolves to the Google server?

    If all you did was update your MX to point to the Google servers, then you should still be able to log into your old POP/IMAP server using the old settings.

    However if you updated your A records for your MX target (say, mail.example.com) to point to the Google IP address, then you've got a few options:

    1. Create a new A record (say, oldmail.example.com) to point to the IP address of the old mail server. Then connect to that.

    2. Point your mail client's POP settings to the IP address of the old mail server (instead of mail.example.com, point it to x.x.x.x).

    From Farseeker
  • Turns out this was quite easy. The old mail server, on my host, serves accounts for all domains, so I just changed the Outlook login to use the host domain name, not the old mail. subdomain, and I could download the mail.

    From BradyKelly
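
For completeness, Farseeker's first option in BIND zone-file syntax (the hostname and IP address here are hypothetical; with Plesk the record would normally be added through the DNS panel rather than by editing the zone file directly):

    ; point a new name at the old mail server so a POP/IMAP client can still reach it
    oldmail.example.com.    IN  A   203.0.113.10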

How can I optimize nginx? From benchmarking it seems Apache2 is faster for static delivery

On one of my VPS servers I've set up both Apache2 and nginx (nginx on port 8080, Apache2 on port 80) and have created a static HTML file.

static HTML/Apache2:

meder@meder-desktop:~$ sudo ab -n 1000 -c 5 http://medero.org/index.html
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking medero.org (be patient)
Completed 100 requests
Completed 200 requests
Completed 300 requests
Completed 400 requests
Completed 500 requests
Completed 600 requests
Completed 700 requests
Completed 800 requests
Completed 900 requests
Completed 1000 requests
Finished 1000 requests


Server Software:        Apache/2.2.9
Server Hostname:        medero.org
Server Port:            80

Document Path:          /index.html
Document Length:        1014 bytes

Concurrency Level:      5
Time taken for tests:   6.186 seconds
Complete requests:      1000
Failed requests:        0
Write errors:           0
Total transferred:      1334000 bytes
HTML transferred:       1014000 bytes
Requests per second:    161.67 [#/sec] (mean)
Time per request:       30.928 [ms] (mean)
Time per request:       6.186 [ms] (mean, across all concurrent requests)
Transfer rate:          210.61 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:       12   15   2.1     14      35
Processing:    12   16   3.4     15      48
Waiting:       12   16   2.8     15      37
Total:         25   31   4.3     29      63

Percentage of the requests served within a certain time (ms)
  50%     29
  66%     30
  75%     31
  80%     32
  90%     35
  95%     39
  98%     47
  99%     51
 100%     63 (longest request)

static HTML/Nginx:

meder@meder-desktop:~$ sudo ab -n 1000 -c 5 http://medero.org:8080/index.html
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking medero.org (be patient)
Completed 100 requests
Completed 200 requests
Completed 300 requests
Completed 400 requests
Completed 500 requests
Completed 600 requests
Completed 700 requests
Completed 800 requests
Completed 900 requests
Completed 1000 requests
Finished 1000 requests


Server Software:        nginx/0.6.32
Server Hostname:        medero.org
Server Port:            8080

Document Path:          /index.html
Document Length:        1014 bytes

Concurrency Level:      5
Time taken for tests:   6.424 seconds
Complete requests:      1000
Failed requests:        0
Write errors:           0
Total transferred:      1226000 bytes
HTML transferred:       1014000 bytes
Requests per second:    155.67 [#/sec] (mean)
Time per request:       32.119 [ms] (mean)
Time per request:       6.424 [ms] (mean, across all concurrent requests)
Transfer rate:          186.38 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:       13   15   2.5     14      36
Processing:    12   17  11.0     15     184
Waiting:       11   16   9.6     14     171
Total:         25   32  11.4     30     200

Percentage of the requests served within a certain time (ms)
  50%     30
  66%     31
  75%     33
  80%     33
  90%     35
  95%     38
  98%     45
  99%     50
 100%    200 (longest request)

I've done this numerous times and the results are pretty much the same, with Apache2 taking less time to process than Nginx.

Here's the config for nginx:

user www-data;
worker_processes  4;

error_log  /var/log/nginx/error.log;
pid        /var/run/nginx.pid;

events {
    worker_connections  1024;
}

http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    access_log  /var/log/nginx/access.log;

    sendfile        on;
    #tcp_nopush     on;

    #keepalive_timeout  0;
    keepalive_timeout  65;
    tcp_nodelay        on;

    gzip  on;

    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}

And Apache 2.2.9-10 (prefork - nonthreaded):

MaxKeepAliveRequests 100
KeepAliveTimeout 15
<IfModule mpm_prefork_module>
    StartServers          5
    MinSpareServers       5
    MaxSpareServers      10
    MaxClients          150
    MaxRequestsPerChild   0
</IfModule>

Loaded modules:

meder@host:~$ sudo apache2ctl -t -D DUMP_MODULES
Loaded Modules:
 core_module (static)
 log_config_module (static)
 logio_module (static)
 mpm_prefork_module (static)
 http_module (static)
 so_module (static)
 alias_module (shared)
 auth_basic_module (shared)
 authn_file_module (shared)
 authz_default_module (shared)
 authz_groupfile_module (shared)
 authz_host_module (shared)
 authz_user_module (shared)
 autoindex_module (shared)
 cgi_module (shared)
 dir_module (shared)
 env_module (shared)
 mime_module (shared)
 negotiation_module (shared)
 php5_module (shared)
 rewrite_module (shared)
 setenvif_module (shared)
 status_module (shared)
 wsgi_module (shared)
Syntax OK

Server details:

Debian Lenny 5.0.3
32-bit Unmanaged VPS
384MB Ram

processor   : 7
vendor_id   : GenuineIntel
cpu family  : 6
model       : 23
model name  : Intel(R) Xeon(R) CPU           E5405  @ 2.00GHz
stepping    : 6
cpu MHz     : 1995.006
cache size  : 6144 KB
physical id : 1
siblings    : 4
core id     : 3
cpu cores   : 4
apicid      : 7
fpu     : yes
fpu_exception   : yes
cpuid level : 10
wp      : yes
flags       : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm syscall nx lm constant_tsc pni monitor ds_cpl vmx tm2 cx16 xtpr lahf_lm
bogomips    : 3990.03
clflush size    : 64
cache_alignment : 64
address sizes   : 38 bits physical, 48 bits virtual
power management:

Memory info:

cat /proc/meminfo 
MemTotal:       393216 kB
MemFree:        304828 kB
Buffers:             0 kB
Cached:              0 kB
SwapCached:          0 kB
Active:              0 kB
Inactive:            0 kB
HighTotal:           0 kB
HighFree:            0 kB
LowTotal:       393216 kB
LowFree:        304828 kB
SwapTotal:           0 kB
SwapFree:            0 kB
Dirty:               0 kB
Writeback:           0 kB
AnonPages:           0 kB
Mapped:          88388 kB
Slab:                0 kB
PageTables:          0 kB
NFS_Unstable:        0 kB
Bounce:              0 kB
CommitLimit:         0 kB
Committed_AS:   354892 kB
VmallocTotal:        0 kB
VmallocUsed:         0 kB
VmallocChunk:        0 kB
HugePages_Total:     0
HugePages_Free:      0
HugePages_Rsvd:      0
Hugepagesize:     2048 kB

It appears that mod_deflate isn't even enabled, so I'm not even using gzip on Apache2, yet it serves the static HTML faster than nginx. I'm a bit puzzled; could it be that I just need to reconfigure the settings for nginx? Any advice appreciated.

Update #1 - I installed apache2-utils and ran dstat. I also changed the test file, so it's now a 9.7 MB HTML file; Apache2 and nginx are still pretty consistent. Perhaps I need to limit the amount of memory available, or something, to create a bottleneck.

Here is dstat running while I requested the 9.7 MB file several consecutive times:

sudo dstat
----total-cpu-usage---- -dsk/total- -net/total- ---paging-- ---system--
usr sys idl wai hiq siq| read  writ| recv  send|  in   out | int   csw 
  0   0 100   0   0   0|   0     0 |   0     0 |   0     0 |   0  7230 
  0   0 100   0   0   0|   0     0 |6071B   20k|   0     0 |   0  5534 
  0   0 100   0   0   0|   0     0 | 720B   21k|   0     0 |   0  4749 
  0   0 100   0   0   0|   0     0 | 822B 4788B|   0     0 |   0  5487 
  0   0 100   0   0   0|   0     0 | 288B  408B|   0     0 |   0  4625 
  0   0 100   0   0   0|   0     0 |5595B 4057B|   0     0 |   0  5966 
  0   0 100   0   0   0|   0     0 | 957B 3710B|   0     0 |   0  4904 
  0   0 100   0   0   0|   0     0 | 986B 5013B|   0     0 |   0  6906 
  0   0 100   0   0   0|   0     0 | 872B 3636B|   0     0 |   0  5614 
  0   0 100   0   0   0|   0     0 |  80B  368B|   0     0 |   0  5506 
  0   0 100   0   0   0|   0     0 |  80B  660B|   0     0 |   0  4883 
  0   0 100   0   0   0|   0     0 |1604B 5105B|   0     0 |   0  5087 
  0   0 100   0   0   0|   0     0 | 860B 3708B|   0     0 |   0    13k
  0   0 100   0   0   0|   0     0 | 909B 3619B|   0     0 |   0    11k
  0   0 100   0   0   0|   0     0 |  16k   44k|   0     0 |   0  5920 
  0   0 100   0   0   0|   0     0 | 132B 3256B|   0     0 |   0  6946 
  0   0 100   0   0   0|   0     0 | 184B 3589B|   0     0 |   0  5083 
  0   0 100   0   0   0|   0     0 | 869B 3637B|   0     0 |   0  5528 
  0   0 100   0   0   0|   0     0 | 917B 3576B|   0     0 |   0  5638 
  0   0 100   0   0   0|   0     0 | 9.8k 2299B|   0     0 |   0  5255 
  0   0 100   0   0   0|   0     0 |6205B   11k|   0     0 |   0  7230 
  0   0 100   0   0   0|   0     0 |1712B   35k|   0     0 |   0  4863 
  0   1  99   0   0   0|   0     0 | 243k   25M|   0     0 |   0  7432 
  0   1  99   0   0   0|   0     0 | 337k   33M|   0     0 |   0  8716 
  0   1  99   0   0   0|   0     0 | 297k   35M|   0     0 |   0  6786 
  0   1  99   0   0   0|   0     0 | 349k   33M|   0     0 |   0  7655 
  0   1  99   0   0   0|   0     0 | 338k   33M|   0     0 |   0  7605 
  0   1  99   0   0   0|   0     0 | 324k   34M|   0     0 |   0  7967 
  0   1  99   0   0   0|   0     0 | 320k   35M|   0     0 |   0  7235 
  0   1  99   0   0   0|   0     0 | 333k   35M|   0     0 |   0  7062 
  0   1  99   0   0   0|   0     0 | 355k   35M|   0     0 |   0  6209 
  0   1  99   0   0   0|   0     0 | 299k   33M|   0     0 |   0  8732 
  0   1  99   0   0   0|   0     0 | 369k   34M|   0     0 |   0  8610 
  0   0 100   0   0   0|   0     0 | 352k   34M|   0     0 |   0  7635 
  0   1  99   0   0   0|   0     0 | 331k   34M|   0     0 |   0  8087 
  0   1  99   0   0   0|   0     0 | 312k   35M|   0     0 |   0  6445 
  0   0 100   0   0   0|   0     0 |  81k 7879k|   0     0 |   0  6131 
  0   0 100   0   0   0|   0     0 |  80B 1848B|   0     0 |   0  5124 
  0   0 100   0   0   0|   0     0 | 120B 6216B|   0     0 |   0  5426 
  0   0 100   0   0   0|   0     0 | 120B 3256B|   0     0 |   0  4947 
  0   0 100   0   0   0|   0     0 |  15k   43k|   0     0 |   0  5632 
  0   0 100   0   0   0|   0     0 | 829B 8504B|   0     0 |   0  5913 
  0   0 100   0   0   0|   0     0 |  92B  384B|   0     0 |   0  8680 
  0   0 100   0   0   0|   0     0 | 926B  571B|   0     0 |   0  4843 
  0   0 100   0   0   0|   0     0 | 795B  675B|   0     0 |   0  5479 
  0   0 100   0   0   0|   0     0 | 280B 2048B|   0     0 |   0  4536 
  0   0 100   0   0   0|   0     0 | 172B 1760B|   0     0 |   0  6334 
  0   0 100   0   0   0|   0     0 | 120B  456B|   0     0 |   0  5710 
  0   0 100   0   0   0|   0     0 |  80B  408B|   0     0 |   0  6225 
  0   0 100   0   0   0|   0     0 | 120B  368B|   0     0 |   0  6639 
  0   0 100   0   0   0|   0     0 | 140B  328B|   0     0 |   0  5507 
  0   0 100   0   0   0|   0     0 |7487B 9697B|   0     0 |   0  7201 
  0   0 100   0   0   0|   0     0 | 920B   37k|   0     0 |   0  6086 
  0   0 100   0   0   0|   0     0 | 320B  536B|   0     0 |   0  5756 
  0   0 100   0   0   0|   0     0 |  40B  384B|   0     0 |   0  7153 
  0   0 100   0   0   0|   0     0 |  80B  368B|   0     0 |   0  5227 
  0   0 100   0   0   0|   0     0 |  80B  408B|   0     0 |   0  6042 
  0   0 100   0   0   0|   0     0 | 160B  368B|   0     0 |   0  6730 
  0   0 100   0   0   0|   0     0 |  80B  280B|   0     0 |   0  5424 
  0   0 100   0   0   0|   0     0 |  80B  336B|   0     0 |   0  8042 
  0   0 100   0   0   0|   0     0 |  40B  384B|   0     0 |   0  5559 
  0   0 100   0   0   0|   0     0 |  80B  280B|   0     0 |   0  6266 
  0   0 100   0   0   0|   0     0 |  80B  296B|   0     0 |   0  6198 
  0   0 100   0   0   0|   0     0 |  80B  456B|   0     0 |   0  6499 
  0   0 100   0   0   0|   0     0 |  80B  368B|   0     0 |   0  7143 
  • On a connection that isn't bandwidth-constrained (as I suspect your tiny little connections here are), gzip-compressed content will be slower to transfer than non-gzip-compressed content, because of the extra CPU involved. Compressing your content is usually faster because smaller chunks of data transfer faster, but with that little test it probably won't help. Try comparing apples with apples and see what you get.

    meder : Ok. I turned gzip off on nginx, ran `ab` again and got 7.766s, 6.270s, then 6.5s for those 1000 requests which is still slower than Apache2's 6.1s ( second test was 6.0s ). I did remember to restart nginx, and I'm querying the same exact static HTML content which is 1014 bytes.
    From womble
  • Have you tried optimizing the performance by serving the files from a ramdisk? Some VPSes are notoriously bad for IOwait time, caused by contended access to the disk.

    Try running dstat on the server while the ab process is running, see whether the disks are taking a massive hit.

    meder : I installed `dstat`, edited my original post with some stats while doing the `ab` testing.
  • In order to get realistic results you must run realistic tests. It's entirely plausible that Apache is faster for your test scenario, but are you really serving just a single one-kilobyte file?

    As you're using mpm-prefork, it's safe to say nginx will consume significantly less memory when there are several concurrent transfers. Concurrent transfers pile up easily if you have large files or your clients have slow internet connections. Nginx will win hands down when you have enough concurrent transfers for Apache to eat up all your memory.

    One can argue this is not really an issue as long as there's enough memory for Apache. However, that is not the whole truth. When less memory is consumed by the HTTP server, more content from the file system will be cached, and every disk seek eliminated is a small performance victory.

    meder : My server is already being used as a production server and hosts 4-5 sites, one of which gets ~1000ish unique hits per day, but I guess that isn't really enough because it always has 300MB of RAM available. So what you're saying is Apache2 will always win when there's enough available RAM and fewer concurrent connections?
    womble : Whoa, a whole *1000* hits a day? Look out Twitter!
    meder : Not 1000 hits, 1000 uniques but probably a several thousand *hits*, of course I know that's nothing compared to the sites with millions/billions of hits per day, but I was just trying to say it isn't *completely* underutilized.
    af : Well, if you get 100k hits a day, every request takes three seconds to serve and they are distributed evenly throughout the day, you'll have 10+X Apache processes running with mpm-prefork, where X is your MinSpareServers setting. You'll have plenty of safety margin, so there's not much to optimize.
    From af
  • Your test is flawed. -c 5 does not properly test either server. An event-based server like nginx is best at handling thousands of concurrent, possibly slow, downloads at once. You tested 5 concurrent downloads. -n 20000 -c 1000 might start to show nginx performing better.

    Try running this tool against both servers, and see which one falls over first

    I bet it won't be nginx :-)

    meder : thanks, I was waiting for a response like this - I'll do more benchmarking and will update!
    From Justin
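
A sketch of the higher-concurrency test Justin suggests, using the same URLs as above (the request counts are illustrative, and the client's open-files limit may need raising before ab will hold 1000 connections open):

    ulimit -n 4096                                            # give the ab client enough file descriptors
    ab -n 20000 -c 1000 http://medero.org/index.html          # Apache on port 80
    ab -n 20000 -c 1000 http://medero.org:8080/index.html     # nginx on port 8080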

Apache redirect based on hostname

I have multiple hostnames resolving to a single machine, e.g.:

  • build.mydomain.com
  • www.mydomain.com
  • jira.mydomain.com
  • mydomain.com

Is it possible to set up Apache to redirect requests based on which hostname they were addressed to?

e.g:

  • build.mydomain.com -> build.mydomain.com:8111,
  • www.mydomain.com -> www.mydomain.com:8080
  • mydomain.com -> www.mydomain.com:8080

The DNS records are set up to all point to the same machine; I just want to redirect to the right port based on the hostname.

Cheers,

Edit:

Machine is debian w/ Apache2

Edit2:

<VirtualHost *:80>
    ServerName www.mydomain.com
    ServerAlias mydomain.com
    redirect 301 / http://www.mydomain.com:8080/
</VirtualHost>

<VirtualHost *:80>
    ServerName build.mydomain.com
    redirect 301 / http://build.mydomain.com:8111/
</VirtualHost>
  • Are you running multiple instances of Apache, each listening on a different port? Why? Virtual hosts will take care of everything.

    http://httpd.apache.org/docs/2.2/vhosts/name-based.html

    slappybag : No, single instance of apache that I wanted to use to redirect requests. Got multiple instances of tomcat running, e.g. TeamCity on 8111, tomcat on 8080 and JIRA on 8081. I wanted to redirect to the respective port given the hostname.
    MidnighToker : how about mod-rewrite running on the apache:80 instance? possibly even apache as proxy but that sounds like far too much effort for a redirect.
    slappybag : I have it half working, using VirtualHost directive with a redirect 301 to the same hostname and different port. This should do for now. Cheers,
    From ynguldyn
  • Actually, I doubt using a redirect is a winning strategy for what you're trying to do. I'd recommend using mod_proxy to create a reverse proxy, which will hide the way you've built your system (a sketch follows below).

    And if I had the choice, I would use something more lightweight and more convenient to configure, like Perlbal.

    From af
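
A rough sketch of the mod_proxy approach af recommends, assuming mod_proxy and mod_proxy_http are enabled (a2enmod proxy proxy_http on Debian) and the backends listen on localhost; unlike a redirect, the backend ports never show up in the browser:

    <VirtualHost *:80>
        ServerName build.mydomain.com
        ProxyPass        / http://localhost:8111/
        ProxyPassReverse / http://localhost:8111/
    </VirtualHost>

    <VirtualHost *:80>
        ServerName  www.mydomain.com
        ServerAlias mydomain.com
        ProxyPass        / http://localhost:8080/
        ProxyPassReverse / http://localhost:8080/
    </VirtualHost>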

Active Directory Group Policy: Script Errors

Hello all. Anyone having issues with AD group policy script errors when enabling VMware Fusion's "Sharing" feature? I've run into this problem in version 2.0 and 3.0. I have a logon script applied on an AD OU. It works fine on all Windows client workstations and in VMware Fusion only when the "Sharing" feature is NOT enabled. Any ideas would be much appreciated. Thanks.

  • Can you run the logon script manually once you log in? Any error output? What is failing about the script?

    My completely off the wall guess is that the sharing feature is mapping drives that your script then tries to map, causing it to fail.

    From Zypher

How to document mail setup after hand-over.

I've just moved a client's email services over from my host to Google Apps. I would like to hand over a document providing everything they (or their agent) would need should I not be available, etc.

How are such documents normally structured, and what level of detail should they contain? I know user names and passwords are essential, and instructions on how to manage domains on Google Apps are over the top, but what is a commonly used middle ground?

  • I am not sure how the documents would normally be structured, but in this scenario I would think documenting the following is appropriate...

    • administrative usernames and passwords
    • guidelines for creating new accounts (e.g. naming conventions)
    • changes you made to MX records
    • changes you made to DNS
    • cautionary statements about making changes to DNS if they have the ability
    • links to access their services (e.g. the DNS entries you setup)
    • links to Google's client oriented documentation (pop3/imap/smtp instructions etc...)
    • links to Google's administrative oriented documentation

    One to two pages should be sufficient. Include a short summary of what Google Apps is and how it is different from hosting your own email or using your ISP's services.

  • We tend to hand over build guides - if it contains a systematic description of how to build the system from scratch, it's probably got all the essential details. Add an FAQ or links to Google's howtos, and you should be fine.

    From caelyx

Error with Auto admin logon in Vista

I have a script that I run to set up computers to auto-login as administrator. I use Vista Home Premium on these computers. I install them with MDT 2010, and after that finishes I run a script that sets auto admin logon by writing to the registry.

The problem is that for some reason the keys in the registry are reset after a reboot. If I run the script once again it works and the keys are not reset. (I make the script delete itself at the end to speed up the workflow.)

Does anyone know why the keys are reset?

I include my script below.

Option Explicit

Dim Temp
Dim oReg
Dim strComputer
Dim strResult
Dim intResult
Dim readValue
const HKEY_LOCAL_MACHINE = &H80000002
strComputer = "."
strResult = ""
Set oReg=GetObject("winmgmts:{impersonationLevel=impersonate}!\\" & strComputer & "\root\default:StdRegProv")


Temp = WriteReg("SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon\", "DefaultUserName","TobiiUser")
Temp = WriteReg("SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon\", "DefaultPassword","Tobii")
Temp = WriteReg("SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon\", "AutoAdminLogon","1")
Temp = WriteReg("SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon\", "DefaultDomainName",".")

Function WriteReg(strKeyPath, strValueName, strValue)



 ' Create key to use
 intResult = oReg.CreateKey(HKEY_LOCAL_MACHINE, strKeyPath)
 If (intResult = 0) And (Err.Number = 0) Then   

  ' write string value to key    
  intResult = oReg.SetStringValue(HKEY_LOCAL_MACHINE,strKeyPath,strValueName,strValue)
  If (intResult = 0) And (Err.Number = 0) Then 

   intResult = oReg.GetStringValue(HKEY_LOCAL_MACHINE,strKeyPath,strValueName,readValue)
   If readValue = strValue Then
    strResult = strResult & "Succeded writing key: " & HKEY_LOCAL_MACHINE & strKeyPath & strValueName & VbCrLf
    End If

  Else
   strResult = strResult & "Failed writing key: " & HKEY_LOCAL_MACHINE & strKeyPath & strValueName & " with error no: " & intResult & VbCrLf
  End If
 Else
  strResult = strResult & "Failed creating key: " & HKEY_LOCAL_MACHINE & strKeyPath & strValueName & " with error no: " & intResult & VbCrLf
 End If

End Function


'Delete the script
DeleteSelf
MsgBox strResult, vbInformation, "Autologon"

Sub DeleteSelf()        
        Dim objFSO
        'Create a File System Object
        Set objFSO = CreateObject("Scripting.FileSystemObject")
        'Delete the currently executing script
        objFSO.DeleteFile WScript.ScriptFullName
        Set objFSO = Nothing
End Sub
  • Hi, did you solve your problem? I ask because I have the same problem and at the moment I'm not able to get out of this impasse! Thanks

    Ola : Yes I did! The problem was that I had the registry value AutoLogonCount = 0; while shutting down, Windows checks whether this equals 0 and, if so, clears DefaultPassword and sets AutoAdminLogon to 0. So make sure to delete the AutoLogonCount value. /Ola
  • The problem was that AutoLogonCount was 0; if it is zero, Windows clears DefaultPassword and sets AutoAdminLogon to 0 at shutdown, thereby removing my recent changes. The solution was to delete the AutoLogonCount value (a command-line sketch follows below).

    From Ola
  • Thank you... I just tried it and it solved my problem!
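
A one-line sketch of Ola's fix, run from an elevated command prompt (or folded into the provisioning script); it deletes the AutoLogonCount value so Windows stops clearing DefaultPassword and AutoAdminLogon at shutdown:

    reg delete "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" /v AutoLogonCount /f

In the VBScript above, the equivalent would be a call to oReg.DeleteValue with the same key path and the value name "AutoLogonCount".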

basic svn repository set up for HTTP

I have SSH access to a server on which I am trying to create a repository. I don't know if Apache is set up correctly for this, but the server does have svn, and it does have mod_dav_svn. Anyway, I SSH into the server:

-bash-3.00$ mkdir httpdocs/svn
-bash-3.00$ svnadmin create httpdocs/svn

I check that the repo was created successfully (it was), and then, using TortoiseSVN, do a checkout from http://server.com/svn/.

The error returned was Repository moved permanently to 'http://server.com/svn/'; please relocate.

What could that indicate?

Note that I don't have root access to the server, and the sysadmin has gone to bed for the night. At this point, what steps need to be taken to get a repo up and running? A virtual host setting in apache? What about a .htaccess? Anything?

update

I'm not sure if this will still get looked at now, but: The sysadmin added this to the conf file for my vhost,

<Location /svn>
DAV svn
SVNParentPath /var/www/vhosts/domain/svn
AuthType Digest
AuthName "Subversion Repository"
AuthUserFile /etc/myauthfile
Require valid-user
</Location>

the /etc/myauthfile was set up accordingly with the users for it. I made sure to set up the repository outside of my httpdocs and httpsdocs directories so as not to conflict with any other namespaces -- I went to /svn in my home directory and created a repository. I tried to check it out from my computer, and--

svn: OPTIONS of 'http://domain/svn': 200 OK (http://domain)

This is confusing... Trying to access it via the browser yields a 404.
Is there something else I'm missing?

  • This is a FAQ on the Subversion site. You cannot enable Subversion access over http without access to re-configure Apache:

    http://subversion.tigris.org/faq.html#http-301-error

    http://svnbook.red-bean.com/en/1.5/svn.serverconfig.httpd.html

    From Alex Holst
  • Your main problem is down to the location you chose for your repository - you created the repository in your public_html folder.

    The extra <Location></Location> stuff that the admin added to the Apache config is correct; however, it needs to point to a directory that is not already handled by Apache.

    The reason you're just getting 404 and 200 responses is that Apache is serving your HTTP SVN request with its regular 'website hat' on, rather than passing the request on to mod_dav_svn.

    Move the repository into a different directory that is outside of Apache's regular document tree, and get the <Location> bit updated to point to the new location of the repository (see the sketch below).
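
A minimal sketch of that fix, assuming the repository lives under the SVNParentPath the admin configured and that path is not inside any DocumentRoot (the repository name and the Apache user are illustrative):

    # on the server: create the repository under the configured SVNParentPath, not under httpdocs/
    mkdir -p /var/www/vhosts/domain/svn
    svnadmin create /var/www/vhosts/domain/svn/myrepo
    chown -R apache: /var/www/vhosts/domain/svn/myrepo   # or www-data, whichever user Apache runs as

    # from the client:
    svn checkout http://domain/svn/myrepo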

Squid, NTLM, Windows 7 and IE8

I'm running Squid 2.7-stable4, Samba 3 and the Windows 7 RC with IE8.

I have NTLM authentication setup on my squid proxy server and it works fine for every combination of browser and Windows (including IE8 on XP and Firefox on Win7), but it doesn't work (keeps asking for authentication) for IE8 on Windows 7.

I can get it to work using the LmCompatibilityLevel registry hack, but I'd really prefer to get it working on the server.

Does anyone have any experience with this? Or know where to start looking? The samba logs don't reveal much.

EDIT: Here's what the wb-MYDOMAIN log says when I attempt to authenticate:

[2009/08/20 15:13:36, 4] nsswitch/winbindd_dual.c:fork_domain_child(1080)
  child daemon request 13
[2009/08/20 15:13:36, 10] nsswitch/winbindd_dual.c:child_process_request(478)
  process_request: request fn AUTH_CRAP
[2009/08/20 15:13:36, 3] nsswitch/winbindd_pam.c:winbindd_dual_pam_auth_crap(1755)
  [ 4127]: pam auth crap domain: MYDOMAIN user: MYUSER
[2009/08/20 15:13:36, 0] nsswitch/winbindd_pam.c:winbindd_dual_pam_auth_crap(1767)
  winbindd_pam_auth_crap: invalid password length 24/282
[2009/08/20 15:13:36, 2] nsswitch/winbindd_pam.c:winbindd_dual_pam_auth_crap(1931)
  NTLM CRAP authentication for user [MYDOMAIN]\[MYUSER] returned NT_STATUS_INVALID_PARAMETER (PAM: 4)
[2009/08/20 15:13:36, 10] nsswitch/winbindd_cache.c:cache_store_response(2267)
  Storing response for pid 4547, len 3240
  • You can't really do this in NTLM. You have to use kerberos, as described at http://serverfault.com/questions/66556/getting-squid-to-authenticate-with-kerberos-and-windows-2008-2003-7-xp.

    From Harley
  • Run the local Group Policy editor on Windows 7 (I don't remember the exact name, but on 2000 and 2003 it is gpedit.msc). Look for Local Computer Policy -> Computer Configuration -> Windows Settings -> Security Settings -> Local Policies -> Security Options -> Network security: LAN Manager authentication level

    Set it to 'Send LM & NTLM - use NTLMv2 session security if negotiated'.

  • I used Squid with NTLM authentication on openSUSE 11.2 and it works, but I can't authenticate from Windows 7.

    Sam Cogan : Firstly, please do not add your own question to someone else's; start your own question. Secondly, please do not include your blog URL as a signature.
  • I modified the local policy and it works! Thanks!

  • The right solution is to use the ntlm_auth program from a more recent Samba distribution: Samba 3.4 and 3.5 seem to authenticate Win7 with NTLMv2 without problems. Samba 3.0 was unable to do it.

    From Giovanni
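
For reference, the policy change in the second answer corresponds to the LmCompatibilityLevel value the original poster mentioned. A hedged sketch of setting it from an elevated prompt on the Windows 7 client (level 1 means "send LM & NTLM, use NTLMv2 session security if negotiated"):

    reg add "HKLM\SYSTEM\CurrentControlSet\Control\Lsa" /v LmCompatibilityLevel /t REG_DWORD /d 1 /f

As Giovanni notes, upgrading Samba so that ntlm_auth understands NTLMv2 is the server-side fix that avoids touching clients at all.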

Conditional DNS forwarding with named on Linux

I have a CentOS 5.2 server which runs named for DNS resolution - it doesn't hold any zones of its own, and just forwards all requests. From the named.conf:

options {
[...]
        forwarders { 1.1.1.1; 1.1.1.2; };
};

All other lines in named.conf are left as default.

I want to change the configuration so requests for anything under newdomain.com get passed to 22.22.22.22, while requests for any other address go to 1.1.1.1 or 1.1.1.2

How can I configure the DNS on this server to do this?

  • Can you operate as a slave for newdomain.com? i.e., do a full transfer?

    MidnighToker : just did this after having problems with forwarding; by far the easiest option, assuming the admin of the other server allows your server to do a full transfer.
  • hehe, I up-voted the previous answer before doing some fettling myself.

    Right, so, if you edit your named.conf and add the following:

    zone "newdomain.com" {
        type forward;
        forward only;
        forwarders { 22.22.22.22; };
    };
    

    Now you won't be able to do reverse lookups easily; you'll have to modify the following zone statement to make sense for the IP address(es) of the domain (this was originally a reverse zone for 192.168.80.0/24).

    zone "80.168.192.in-addr.arpa" {
        type forward;
        forward only;
        forwarders { 22.22.22.22; };
    };
    

    After making the changes, you should

    1. Check that you haven't faffed up the config files: named-checkconf

    2. Tell BIND to reload its config: rndc reload (much preferred to /etc/init.d/bind reload)

    Bear in mind this will return non-authoritative answers for the domain. The way around this (and to offer better local caching should the remote DNS be problematic) would be to act as a slave for the zone (see the slave-zone sketch below).


    Edited to add the forward only; statement. This will cause the query to fail after trying the server(s) specified in forwarders, rather than failing and then trying a standard lookup. Also edited to change /etc/init.d/bind reload to rndc reload after advice in the comments.

    Zypher : The command 'rndc reload' is the preferred method for reloading bind configuration files, instead of using the init scripts to restart the daemon
    MidnighToker : Zypher - thanks for setting me right about using rndc - I didn't realise.
    DrStalker : Thanks, it's working perfectly.
  • If you are trying to optimize, and 22.22.22.22 is auth for that zone, you can also use a stub zone:

    zone "newdomain.com" {
        type stub;
        masters { 22.22.22.22; };
    };
    

    This does something slightly different from forwarding. It will query the server 22.22.22.22 for NS records, and keep them in the cache at all times. This will do almost the same thing, but if another NS host (say, 33.33.33.33) was also listed, your server would then learn about it and use it as well.

    I believe a stub zone here is a better option than conditional forwarding.
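
Both answers suggest slaving the zone instead; a minimal named.conf sketch, assuming the administrator of 22.22.22.22 allows zone transfers to your server:

    zone "newdomain.com" {
        type slave;
        masters { 22.22.22.22; };
        file "slaves/newdomain.com.db";   # directory must exist and be writable by named
    };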

Multiple boot options (in grub boot list) appeared after yum update.

I am quite new to Linux administration, so I am having trouble understanding yum update and the GRUB boot list.

I recently did a yum update on an old CentOS machine. Everything is good except that multiple boot options now appear in the GRUB boot list, and I wonder why. I managed to do a Google search and figured out that I can manually configure the boot order in /etc/grub.conf.

  1. Does it mean I now have multiple OSes installed?
  2. Will my GRUB list grow if I do future yum updates?
  3. Do I need to clean up old items from the list?

Thanks.

  • If you install a new kernel through yum it will appear in your boot list. I think yum takes the current one, makes it a second entry, then makes the new kernel the default/first entry. This lets you boot the old kernel if you need to after updating and having a problem.

    1. No, just other versions (usually older) of the kernel, as noted above.

    2. Yes, each time you update the kernel, you will get a new entry.

    3. No. Probably the easiest way, if you want to, would be to go to /boot and remove the older kernels and related files (they will have the same string in the middle, such as 2.6.9-42); a package-based alternative is sketched after the listing below. I would at a minimum keep the current and previous version (i.e. two known-good configs), just in case. But frankly, who cares? It's not much space (14MB for the example below), and you can just ignore the old stuff, as it's down at the bottom of the screen.

    some_hostname Sun Jan 03 19:17:58 /boot
    root > ls -1t
    grub                             <- boot loader config files
    initrd-2.6.9-78.0.13.EL.img          <- the 2.6.9-78 related files
    initrd-2.6.9-78.0.13.ELsmp.img       <- for both smp (multi core/thread)
    symvers-2.6.9-78.0.13.ELsmp.gz       <- and uni processor
    config-2.6.9-78.0.13.ELsmp           <-
    System.map-2.6.9-78.0.13.ELsmp       <-
    vmlinuz-2.6.9-78.0.13.ELsmp          <-
    symvers-2.6.9-78.0.13.EL.gz          <-
    config-2.6.9-78.0.13.EL              <-
    System.map-2.6.9-78.0.13.EL          <-
    vmlinuz-2.6.9-78.0.13.EL             <-
    initrd-2.6.9-42.ELsmp.img
    initrd-2.6.9-42.EL.img
    lost+found
    config-2.6.9-42.ELsmp
    System.map-2.6.9-42.ELsmp
    vmlinuz-2.6.9-42.ELsmp
    config-2.6.9-42.EL
    System.map-2.6.9-42.EL
    vmlinuz-2.6.9-42.EL
    message
    message.ja
    some_hostname Sun Jan 03 19:18:05 /boot
    root > 
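
A safer alternative to deleting files out of /boot by hand is to let the package manager remove old kernels. A hedged sketch, assuming the yum-utils package is available for this CentOS release:

    rpm -q kernel kernel-smp                  # list the installed kernel packages
    yum install yum-utils                     # provides package-cleanup
    package-cleanup --oldkernels --count=2    # keep the two newest kernels, remove the rest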
    

asus rs700-e6/rs4 ipmi question

This question is regarding this server: http://www.newegg.com/product/product.aspx?Item=N82E16816110040

I'm trying to figure out if I actually need to buy an ASMB4-iKVM (is it a chip?).

The info I have found on Google is very vague.

Do I need this "ASMB4-iKVM" to get IPMI to work?

sorry if this is a stupid question.

  • Newegg shows the model you linked to as "deactivated" (discontinued?), but a search of their site lists a few other motherboards with "ASMB4-iKVM" as part of their model number. You'll need to contact them to see if it's included.

    Here is what ProVantage shows for the ASMB4-iKVM (for $55.55).

What is Multipath I/O?

What exactly does "Multipath I/O" mean?

  • This term is used most commonly in reference to how SAN storage volumes get connected to the servers that they're assigned to. For instance, with a multipath fibre channel setup, there would be redundant fibre paths between the SAN and the server, with each path going through different FC switches, connecting to different FC cards, etc. This way, if any single piece of hardware goes down (be it an FC switch, FC card, fibre patch, etc.), I/O will still be able to continue. The same principles can be applied to iSCSI.

    From ErikA
  • In addition to ErikA's excellent response, multipath I/O (MPIO) not only provides a redundancy enhancement, but also a performance enhancement if both/multiple paths are utilized.

    From SirStan
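
To make this concrete on a Linux host, a small sketch using the device-mapper multipath tools (package and init-script names vary by distribution):

    multipath -ll                     # show each multipathed LUN, its path groups, and per-path state
    /etc/init.d/multipathd status     # the multipathd daemon handles automatic failover/failback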

Setting Up Customer-Specific Domains

I can go to Fog Creek's web site, set up a new account, and they will instantly assign me a URL such as 'mycompany.fogbugz.com' (where 'mycompany' is something I make up, as opposed to some value assigned by Fog Creek). I can do the same type of thing with Beanstalk and many other vendors. I have been Googling around trying to figure out exactly how this works.

1: In the above example, is 'mycompany.fogbugz.com' set up in DNS in some special way other than how one would set up a vanilla 'www.foo.com' domain?

2: Assuming Fog Creek uses Tomcat (which I am sure is NOT true, but pretend it is) would they be likely to have created a tomcat/webapps/mycompany subdirectory on their server? Or is there some simpler way to handle this?

I'm obviously not a DNS or TC wizard. Any insight appreciated. Happy New Year!

  • This is what's called a wildcard subdomain (in the DNS), which is then handled using URL rewriting.

    A wildcard subdomain looks like this:

    *.domain.tld.      IN  A    1.2.3.4
    

    Then you can set apache to accept requests to any subdomain:

    <VirtualHost 111.22.33.55>
        DocumentRoot /www/subdomain
        ServerName www.domain.tld
        ServerAlias *.domain.tld
    </VirtualHost>
    

    Then you can use mod_rewrite to redirect traffic on one of these subdomains to a subfolder or a query string. Something like this:

    RewriteCond %{HTTP_HOST} ^(www.)?([a-z0-9-]+).domain.com [NC]
    RewriteRule (.*) %2/$1 [L]
    
    Martijn Heemels : You can actually avoid the mod_rewrite and do it all in a single VirtualHost block, by using VirtualDocumentRoot. This is called 'Mass Virtual Hosting'. See http://httpd.apache.org/docs/2.2/vhosts/mass.html. This allows you to simply create a new website by making a directory. For example if you use the wildcard subdomain in DNS, you can set 'VirtualDocumentRoot /var/www/%-3'. Then, if you simply make a directory /var/www/mysite it will be visible as website mysite.domain.tld. Easy isn't it? The %-3 means, split the hostname and take the third part from the right, i.e. 'mysite'.
    From adam
  • I don't know about Tomcat, but in IIS, if the website is bound to an IP address (i.e. no specific host header/subdomain), all subdomains will point to the same site (not sure of the exact terminology here).

    If this is the case you can programmatically detect the subdomain and react accordingly.

  • One example of this approach is subdomain_fu, a subdomain handler for Rails, explained in this screencast: http://media.railscasts.com/videos/123_subdomains.mov.

    Conceptually: you can set up Apache with a subdomain catch-all server alias and then do the subdomain processing within your web framework.

    From The MYYN
  • Wow. It seems ServerFault is as useful as StackOverflow. Awesome. Thanks guys!
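
A small sketch of the mass virtual hosting approach Martijn Heemels mentions in the comment above, which avoids mod_rewrite entirely (assumes mod_vhost_alias is loaded; the directory layout is hypothetical):

    # each subdomain maps straight onto a directory: mysite.domain.tld -> /var/www/mysite
    UseCanonicalName Off
    <VirtualHost *:80>
        ServerAlias *.domain.tld
        VirtualDocumentRoot /var/www/%-3
    </VirtualHost>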

programmatically check if a domain is available?

Using this solution http://serverfault.com/questions/98940/bot-check-if-a-domain-name-is-availible/98956#98956 I wrote a quick script (pasted below) in C# to check if a domain MIGHT be available. A LOT of results come up with taken domains. It looks like all 2- and 3-letter .com domains are taken, and it looks like all 3-letter ones are taken as well (not including names with numbers, many of which are available). Is there a command or website that can take my list of domains and check if they are registered or available?

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

using System.Text.RegularExpressions;
using System.Diagnostics;
using System.IO;

namespace domainCheck
{
    class Program
    {
        static void Main(string[] args)
        {
            var sw = (TextWriter)File.CreateText(@"c:\path\aviliableUrlsCA.txt");
            int countIndex = 0;
            int letterAmount=3;

            char [] sz = new char[letterAmount];
            for(int z=0; z<letterAmount; z++)
            {
                sz[z] = '0';
            }
            //*/
            List<string> urls = new List<string>();
            //var sz = "df3".ToCharArray();
            int i=0;
            while (i <letterAmount)
            {
                if (sz[i] == '9')
                    sz[i] = 'a';
                else if (sz[i] == 'z')
                {
                    if (i != 0 && i != letterAmount - 1)
                        sz[i] = '-';
                    else
                    {
                        sz[i] = 'a';
                        i++;
                        continue;
                    }
                }
                else if (sz[i] == '-')
                {
                    sz[i] = 'a';
                    i++;
                    continue;
                }
                else
                    sz[i]++;
                string uu = new string(sz);
                string url = uu + ".ca";
                Console.WriteLine(url);
                Process p = new Process();
                p.StartInfo.UseShellExecute = false;
                p.StartInfo.RedirectStandardError = true;
                p.StartInfo.RedirectStandardOutput = true;
                p.StartInfo.FileName = "nslookup";
                p.StartInfo.Arguments = url;
                p.Start();
                var res = ((TextReader) new StreamReader( p.StandardError.BaseStream)).ReadToEnd();
                if (res.IndexOf("Non-existent domain") != -1)
                {
                    sw.WriteLine(uu);
                    if (++countIndex >= 100)
                    {
                        sw.Flush();
                        countIndex = 0;
                    }
                    urls.Add(uu);
                    Console.WriteLine("Found domain {0}", url);
                }
                i = 0;
            }
            Console.WriteLine("Writing out list of urls");
            foreach (var u in urls)
                Console.WriteLine(u);
            sw.Close();
        }
    }
}
  • This is the kind of request that has to be done via WHOIS, and WHOIS relies on published registrar entries. Some TLDs (such as .to) do not publish a public WHOIS. For others, such as .com.au, each registrar maintains its own WHOIS, so you need to find out which registrar the domain is registered with and then query their WHOIS.

    Also, all web-based WHOIS services I've used feature a captcha, exactly to stop people from doing what you're trying to do. This is for many reasons, and a big one is to stop cyber-squatting.

    Speaking of which, if you're doing this with the intention of cyber-squatting: a) shame on you, b) this is actually illegal in some countries and TLDs.

    Finally, all of that said, there are plenty of WHOIS components for C#. This is one I found randomly.

    acidzombie24 : Looks like cyber-squatting has already been done on the .CA domain. So far I have only found available domains which have numbers or '-' in them. That component is on a suspicious site, so I won't try it; I wanted to find a 3-letter domain, preferably. I guess I'll use one of the 3 I like with numbers.
    Farseeker : Hi acidzombie. That was just one URL I found by doing a very quick Google search. There are plenty of other WHOIS components around, but you need to know which registrar to query.
    From Farseeker
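
Where a command-line whois client is available, a hedged sketch of checking a list of candidate names (the "not found" wording differs per registry, so verify what your TLD's WHOIS actually returns for an unregistered name before trusting the grep; bulk querying is exactly what registries rate-limit and may be against their terms):

    #!/bin/sh
    # names.txt holds one candidate label per line, e.g. "ab1"
    while read name; do
        if whois "${name}.ca" | grep -qi "not found"; then
            echo "${name}.ca looks unregistered"
        fi
        sleep 5   # throttle; registries block aggressive bulk WHOIS queries
    done < names.txt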

postgres on 64 bit linux or 64 bit windows server 2008?

Postgres does not have a 64-bit binary for Windows Server.

quote "As there is generally no reason to run with shared_buffers > 256 - 512MB on Windows, there isn't a great deal of incentive to put in the effort required for the 64 bit port"

  1. Why is there generally no reason to run with lots of memory on Windows?
  2. Would a 64-bit Linux installation be more efficient? If so, which?

This server has 8 GB of memory, and that will likely increase to 12 GB. We intend to allocate almost all of the memory to Postgres. For what I'm doing I can happily do without a UI.

  • http://swik.net/PostgreSQL/Planet+Postgresql/Magnus+Hagander:+PostgreSQL+vs+64-bit+windows

    Postgres leaves disk caching up to the operating system, so there would be little benefit to a 64 bit build on Windows. The developers have decided to spend their time on something more productive and less painful.

    But, there is some benefit, since 64 bit code is a bit faster and a bit smaller, and since the Unix version had to be made 64 bit clean for some architectures (Itanium and Alpha, particularly), that job was done a long time ago.

    Personally, I'd default to running a dedicated database server on Linux. However, you will have to weigh up the cost of the Windows license against the administration skills; in my environment, everyone knows how to administer Linux boxes, whereas you may have to hire someone to do that, or learn enough (not a good idea if security is critical). Which distribution to use is mostly about what your administrator(s) are current with; in my case, that would be Ubuntu Server Edition.

    pstanton : thanks, have cancelled win server and am self installing fedora.
  • Work is being done for the next version, and most likely PostgreSQL 8.5 will run as a native 64-bit binary on Win64. It remains to be seen how many of the third party pieces will work (for example, TCL and MIT Kerberos don't currently provide 64-bit versions on Windows), but the core database should be available.

    Note that this would really only be necessary if you want either total work_mem or shared_buffers to be very large. In most cases, that won't be a problem, but if you are running large data-warehouse style queries, for example, it might be interesting even with as little memory as your server has. But it's mainly being developed to deal with large-memory systems, and compatibility with third-party libraries.

    That said, PostgreSQL will run faster on a Linux/Unix based platform, so if you have that as an option, you should go with it. PostgreSQL has been designed for a Unix architecture, and keeps this architecture on Windows (for example, processes rather than threads), which makes it slower there.

    As for which distribution you choose, it doesn't matter from a PostgreSQL perspective. Pick something that your administrators, or someone you work with, feel comfortable with.

    Oh, and the proper URL for the blog post referred to in Andrew's answer is http://blog.hagander.net/archives/73-PostgreSQL-vs-64-bit-windows.html, and it contains an explanation of the memory issue with 32-bit PostgreSQL on 64-bit Windows.
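
As a rough starting point only (illustrative numbers, not values taken from the answers above), a dedicated 8 GB Linux box running PostgreSQL 8.x might begin with something like this in postgresql.conf and then be tuned against the real workload:

    shared_buffers = 2GB            # often ~25% of RAM on a dedicated server
    effective_cache_size = 6GB      # hint to the planner about how much the OS will cache
    work_mem = 32MB                 # per sort/hash operation; multiply by expected concurrency before raising
    checkpoint_segments = 16        # fewer, larger checkpoints for write-heavy loads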

colocation wants minimum of 1 MBit transfer

If a colocation company wants you to commit to at least 1 Mbit of transfer, I'm curious what that translates to monthly, assuming one maxes out the 1 Mbit.

Any general rules of thumb in terms of how many amps to get?

  • I think 1Mbps works out to around 2.5Tb per month. But you need to be careful how it is billed, because being charged for TOTAL data transferred is very different from 95th percentile billing.

    As far as amps, I would check the specs of the hardware you plan on using.

    womble : It's about 2.5Tb/month, not 2.5TB.
    malonso : Ooohh, good catch. Answer update.
    : womble, you say 328 or 2.5TB?
    womble : I don't say 2.5TB... that's why I wrote "not 2.5TB"
    : ah, so tera bits / 8 = 328GB thanks.
    From malonso
  • If you run a 1MBit connection continually maxed out, you can get about:

    730.5*3600/8 = 328725 ~= 328GB
    

    per month down that channel.

    HOWEVER, there is almost no chance that you'll consistently use exactly that much bandwidth throughout a month, even if you're doing something fairly consistent, traffic-wise (spamming, off-site backups, etc.) -- you'll still have peaks and troughs. For web traffic, a 4:1 peak:average ratio is my standard ratio.

    In my experience, these days it works out cheaper to buy a data allocation rather than a fixed-width pipe, as by the time you apply peak ratios and buy an appropriately sized pipe it's more expensive than just buying the data. The only time I wouldn't do that is if I was working for someone with an absolutely fixed connectivity budget, where sticking to the numbers was far more important than good performance. I'd also be looking for a new client to work for.

    As far as power goes, (and this is a completely separate question that should have been asked as such) that is dependent on the equipment you're installing. There's any number of previous questions here on serverfault dealing with that issue.

    : can you explain the 730.25 and why /8?
    : so is it 328GB or 2.5TB, or are your referring to something else?
    womble : 730.5 => number of hours in an average month (365.25 * 24 / 12 -- I hope you can work out what those numbers are); divided by 8 is to convert bits into bytes (well, octets, anyway).
    From womble
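
The same arithmetic as a one-liner, for anyone who wants to rerun it with a different committed rate (the leading 1 is the pipe size in Mbit/s):

    echo '1 * 730.5 * 3600 / 8' | bc    # => 328725 MB per month, i.e. roughly 328 GB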