Saturday, January 15, 2011

How to count all subfolders in a directory?

How can I count how many folders (including subfolders) I have under a specific folder?

  • Use find to select the directories and wc to count them.

    find <directory> -mindepth 1 -type d | wc -l
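
    If a directory name could ever contain a newline, counting lines will over-count. A sketch that sidesteps this with GNU find (the -printf extension is not in POSIX find) by printing one byte per directory and counting bytes instead:

    find <directory> -mindepth 1 -type d -printf x | wc -c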
    
    : Wow, is there a faster solution? ;-) `time find product_images/ -mindepth 1 -type d | wc -l` counted 574292 directories in real 67m34.672s (user 0m4.272s, sys 0m35.790s).
    Dan Carley : That's a big tree. The only way you can speed it up is to work on an index of that structure. But creating and updating the index will be just as slow. And the most common tool for indexing and searching the local FS, `locatedb`, won't allow you to refine your search in the same way as `find`.
    asdmin : yes, with 574292 subdirs and who knows how many files, it's the fastest way to count the dirs
    From Dan Carley
  • tree -d | grep directories | awk -F' ' '{print $1}'

    Dennis Williamson : I'd use `tail` since there could be a directory called "directories" and there's no need for the -F with the awk: `tree -d|tail -n1|awk '{print $1}'`
    From dyasny

How to update an SSL certificate with Tomcat 5.5

My client is running Tomcat 5.5 and is using SSL. Their certificate is about to expire and they have purchased a renewal. I was given a .cer file and asked to update Tomcat.

The existing server.xml contained the following connector:

<Connector port="443" maxHttpHeaderSize="8192"
           maxThreads="150" minSpareThreads="25" maxSpareThreads="75"
           enableLookups="false" disableUploadTimeout="true"
           acceptCount="100" scheme="https" secure="true"
           clientAuth="false" sslProtocol="TLS"
           keystoreFile="companyname.keystore" keyAlias="tomcat2" />

I ran %JAVA_HOME%\bin\keytool -list -keystore companyname.keystore

Keystore type: jks
Keystore provider: SUN

Your keystore contains 3 entries

root, Aug 7, 2007, trustedCertEntry,
Certificate fingerprint (MD5): 8F:5D:77:06:27:C4:98:3C:5B:93:78:E7:D7:7D:9B:CC
tomcat, Jun 12, 2007, keyEntry,
Certificate fingerprint (MD5): 33:80:6F:75:5A:B4:BC:C7:7A:7D:4F:3F:FA:C0:95:2F
tomcat2, Jun 14, 2008, keyEntry,
Certificate fingerprint (MD5): 0A:9B:73:6A:EE:2F:18:99:61:49:28:F3:CD:1E:DF:96

SSL still works if I delete the entry with the alias "tomcat". I'm assuming that's an artifact from a previous expired certificate.

%JAVA_HOME%\bin\keytool -import -keystore companyname.keystore -alias tomcat3 -file 2009cert.cer

I updated server.xml to set keyAlias to tomcat3. When I restart Tomcat, I see this in the log:

SEVERE: Error initializing endpoint
java.io.IOException: Alias name tomcat3 does not identify a key entry
    at org.apache.tomcat.util.net.jsse.JSSE14SocketFactory.getKeyManagers(JSSE14SocketFactory.java:143)
    (etc.)

When I re-run the keytool -list command:

Keystore type: jks
Keystore provider: SUN

Your keystore contains 4 entries

root, Aug 7, 2007, trustedCertEntry,
Certificate fingerprint (MD5): 8F:5D:77:06:27:C4:98:3C:5B:93:78:E7:D7:7D:9B:CC
tomcat, Jun 12, 2007, keyEntry,
Certificate fingerprint (MD5): 33:80:6F:75:5A:B4:BC:C7:7A:7D:4F:3F:FA:C0:95:2F
tomcat3, Jul 21, 2009, trustedCertEntry,
Certificate fingerprint (MD5): 8E:9F:F9:52:7B:07:B1:DB:BF:F3:96:BD:5F:49:2E:9F
tomcat2, Jun 14, 2008, keyEntry,
Certificate fingerprint (MD5): 0A:9B:73:6A:EE:2F:18:99:61:49:28:F3:CD:1E:DF:96

Does this have something to do with the tomcat3 entry being marked as "trustedCertEntry" rather than "keyEntry"?

What am I doing wrong?

  • The fact that it's registering as a TrustedCert would seem to indicate that there's no key for tomcat3. It's likely that the new certificate was requested for the existing key tomcat2. Keys themselves don't expire, just the certificates.

    You can request a new certificate at any time, either by generating a new cert signing request or by reusing the original one; either is fine. Take a backup copy of your keystore and then import the certificate for the tomcat2 alias.

    %JAVA_HOME%\bin\keytool -import -keystore companyname.keystore -alias tomcat2 -file 2009cert.cer
    

    After that, you'll also want to point your tomcat instance back at tomcat2.
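
    If you're not sure which key entry a renewed certificate actually belongs to, you can compare public-key moduli before importing. A sketch, assuming an RSA key and that openssl is available (the exported filename is hypothetical; add -inform der to the last line too if the CA delivered a binary .cer):

    # Export the certificate currently stored against the tomcat2 key (DER by default)
    %JAVA_HOME%\bin\keytool -export -keystore companyname.keystore -alias tomcat2 -file tomcat2-current.cer
    # The two moduli must match; if they differ, keytool will refuse the
    # import ("Public keys in reply and keystore don't match")
    openssl x509 -inform der -in tomcat2-current.cer -noout -modulus
    openssl x509 -in 2009cert.cer -noout -modulus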

    Jeremy Stein : When I try that, I get this error: keytool error: java.lang.Exception: Public keys in reply and keystore don't match
    Frenchie : By the look of it, the only other key in the keystore is tomcat, so give that a try instead of tomcat2. Failing that, someone has generated a CSR from another key elsewhere. Importing keys into a keystore is possible, but it's non-trivial.
    Jeremy Stein : You nailed it. It turns out the client had created a new private key and CSR and hadn't mentioned that fact. Thanks!
    From Frenchie

Open Source alternative to "Google Appliance" for intranet search?

Are there any alternative open source solutions (with a web console)?

  • Try this: http://www.flax.co.uk/index.shtml

    (I have no experience with this product or other enterprise search products).

  • I have used 'htdig' in the past for intranet search. It is good and indexes PDF documents by default. Once you add filters that translate documents to text for indexing, it will support other formats as well.

  • It's not open source, but Microsoft Search 4.0 is free at this link

    I would say it's worth trying. I liked the formatting of the results returned, but the problem was that the results would include documents a user could not access due to security. So it was no good for us, since document names can contain restricted information too, such as "Bob-Warning Letter.doc"

  • I've found a solution with Google Desktop Search (which can be used like a web appliance with a plugin): read more...

    SpaceManSpiff : Sounds like the same thing Microsoft Search Server will do, but with more effort.
    Martin K. : More effort but free of charge!
    SpaceManSpiff : Microsoft's Search Server Express is completely free of charge, and the only difference between it and the Enterprise edition is that Express does not have load balancing. By the way, the DNKA plugin, according to your link, has a small charge for commercial use. Funny how Google requires you to use their hardware for their enterprise searches. I think Google could clean up in this area if they released a server edition of the software that could be installed on your own server.
    Martin K. : The information on that page is outdated! DNKA is now free for commercial use. The solution is absolutely free of charge. When I try to download the Express edition, everything is labelled "Demo" or "Test"!? Why should Google require you to use their hardware? The solution I mentioned is free of charge and only requires a Windows environment. It also works with Mozilla as the client (e.g. from Unix/Linux boxes). I've read that the Google search performance is significantly better.
    SpaceManSpiff : Try this link - http://www.microsoft.com/enterprisesearch/en/us/search-server-thankyou.aspx You were likely trying to download the full Enterprise edition, which is a trial. Cool that your DNKA is free now. So is this one. What I was trying to say is that for a company to have an intranet search from Google (without it being a mashup) requires a Google appliance. It would be great if Google did a software-only enterprise intranet search that was not a mashup of their desktop search and 3rd-party tools. More admins would prefer that, I think.
    Martin K. : They have such a product (the Google Appliance) for sale. I don't think they'll give it away for free. Google Desktop Search doesn't rely on any specific hardware (except x86, but that's true of your Windows solution too).
    From Martin K.
  • Solr, from the Apache Lucene project. Excerpt from the web site:

    http://lucene.apache.org/solr/

    Solr is an open source enterprise search server based on the Lucene Java search library, with XML/HTTP and JSON APIs, hit highlighting, faceted search, caching, replication, a web administration interface and many more features. It runs in a Java servlet container such as Tomcat.
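
    To get a feel for the HTTP API, a query is just a GET request. A sketch against the bundled Jetty example setup on its default port 8983, using the standard select handler:

    curl 'http://localhost:8983/solr/select?q=intranet&wt=json&rows=10'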

    From Will Glass

Strange GD error when working on PNG images from PHP on FreeBSD

I have a problem with my FreeBSD 7.1 server. PHP's GD implementation no longer works on PNG images. Whenever the system tries to work with PNG images, I get these three error messages:

[Sat Jul 18 21:41:15 2009] [error] [client 90.34.34.34] PHP Warning:  imagecreatefrompng() [function.imagecreatefrompng]: gd-png:  fatal libpng error: [00][00][00][00]: unknown critical chunk in /usr/storage/www/private/mikkel.hoegh.org/modules/acquia/imageapi/imageapi_gd.module on line 44, referer: http://mikkel.hoegh.org/admin/build/imagecache/3
[Sat Jul 18 21:41:15 2009] [error] [client 90.34.34.34] PHP Warning:  imagecreatefrompng() [function.imagecreatefrompng]: gd-png error: setjmp returns error condition in /usr/storage/www/private/mikkel.hoegh.org/modules/acquia/imageapi/imageapi_gd.module on line 44, referer: http://mikkel.hoegh.org/admin/build/imagecache/3
[Sat Jul 18 21:41:15 2009] [error] [client 90.34.34.34] PHP Warning:  imagecreatefrompng() [function.imagecreatefrompng]: 'sites/mikkel.hoegh.org/files/imagecache_sample.png' is not a valid PNG file in /usr/storage/www/private/mikkel.hoegh.org/modules/acquia/imageapi/imageapi_gd.module on line 44, referer: http://mikkel.hoegh.org/admin/build/imagecache/3

I've been trying to solve this for half a day now, and the best clue I've found is another guy having the same problem – no solution there, though.

The code in question is fairly simple; it just calls imagecreatefrompng($filename);

Package versions of all the packages I can think of that might be related:

  • php5-5.2.10
  • php5-gd-5.2.10
  • png-1.2.37
  • gd-2.0.35_1,1

Any clues?

  • It could be a problem with the PNG image itself. Try very basic code with a very small black-and-white PNG image. If that also generates the same errors in the log files, then you could consider installing PHP from source so that modules like php_gd get updated to the latest version.

    You can also try setting

    error_reporting  =  E_ALL
    display_errors = On
    

    in case they give a better error message on screen. Remember to set display_errors = Off after you have finished debugging on a production server.
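
    To take the web stack out of the picture entirely, the same call can be exercised from the command line. A sketch; note the CLI binary may load a different php.ini and extension set than Apache's, and test.png is any small known-good file:

    php -r 'error_reporting(E_ALL); ini_set("display_errors", "1"); var_dump(imagecreatefrompng("test.png"));'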

    mikl : Well, it is the same with all PNG images I've tried. And as far as I can see, the error message is exactly the same whether you output it to the log or to the screen.
    Saurabh Barjatiya : Based on your distribution you would have some package manager like yum or apt. Try updating apache, php, and php_gd using that package manager. If that does not work, then I guess recompiling PHP from source would be the best option.
    mikl : The versions I mentioned are the latest available – I have tried the upgrade strategy already…
  • I had a recent problem similar to this. After upgrading one of the packages on my 7.2 system, the GD-driven captcha on my phpBB2 installation stopped working. I rebuilt all of the PHP ports and it fixed itself.

    I know that's a bit vague, but sometimes things will break over months of incremental upgrades due to dependencies getting out of whack.

    mikl : Yeah, I already tried rebuilding php-gd with all of its dependencies. I think I might try something more… drastic. I have an upgrade to 7.2 I've been putting off for a while, so if I have to rebuild all my packages, that might be a good occasion :)
  • No answer, but since I cannot leave any comment: I have a 7.2 system with exactly the same problem and exactly the same versions. I even tried downgrading libpng, to no avail. Doing a binary upgrade from 7.1 to 7.2 was extremely painless and well worth it, but I don't think that will fix the problem :-)

    I also installed pecl-imagick to see if I could use that instead, but to my surprise I got similar errors. I tried lots of other software that depends on libpng, and it could all load the images that were giving errors in php5-gd and pecl-imagick just fine. This made me rule out libpng, which at first I thought was the problem. My next guess is that something in the PHP API has changed; I will try to downgrade PHP and see if that helps.

    mikl : Okay, interesting – I've managed to get my image processing working by using ImageMagick (the CLI util, not the PECL package). That could indicate that this might in fact be a PHP bug. *shakes tiny fist at PHP*
  • Before updating always read

    /usr/ports/UPDATING
    

    Sometimes you'll need to do a recursive portupgrade, i.e.

    portupgrade -fr png-1.2.37
    
    mikl : Was there anything in particular I should have noticed in UPDATING? I can't spot anything related to this problem. I don't use portupgrade, but I'm running a `sudo portmaster -ru graphics/png` now to see if that'll fix it.
    mikl : Recursive rebuild did not fix the problem, sadly. I tried earlier by rebuilding all dependencies of php5-gd, but that had no effect either.
    SaveTheRbtz : maybe, as last resort, `portmaster -fa` will help?
    mikl : Thankfully, it has disappeared after I upgraded to FreeBSD 7.2. I think the decisive change was the complete rebuild of all packages. Thanks.
  • This command solved my problem:

    portupgrade -fr png-1.2.40

  • If you are using portmaster, this will work:

    portmaster -dbrR png-1.2.40
    
    From Markus

Which disk mode is the fastest and most reliable in VMware Server 1.9 (Ubuntu 64-bit guest)?

I don't want snapshots, and they are deactivated in the guest, which is Ubuntu 64-bit.

Which disk mode is the fastest and most reliable in VMware Server 1.9? (I am using SCSI.)

I am not sure if it is "dependent" or "independent persistent". Actually, I don't understand what "dependent" means in this case.

(I assume "independent persistent" is not the right choice because it is less safe)

So my priorities are safety and speed, and I don't want snapshots.

Edit: My question was not what I wanted to ask. I want the fastest setting. If having snapshots activated (even though I never use them) is the fastest, then I want that. So which is the fastest setting for virtual disks?

  • Independent and dependent refer to snapshots.

    Independent disks do not get snapshotted when a snapshot is taken, while dependent disks do.

    If you aren't using snapshots, there's no difference in behavior practically speaking.

    Persistent and non-persistent refer to whether or not changes are written to the virtual disk permanently.

    If a disk is persistent, changes to the disk are written. If a disk is non-persistent, changes are lost when the VM is shut down or restarted.

    I don't know that either has a specific impact on performance of the virtual disk.
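
    For reference, the mode is a per-disk setting in the VM's .vmx file. A sketch (the scsi0:0 identifier depends on your controller layout; omitting the line gives you the default dependent behaviour):

    scsi0:0.mode = "independent-persistent"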

    From damorg

Why exactly is --skip-slave-start recommended with MySQL `START SLAVE UNTIL`?

When issuing START SLAVE UNTIL on MySQL, I'm getting the following:

Warning: It is recommended to use --skip-slave-start when doing step-by-step replication with START SLAVE UNTIL; otherwise, you will get problems if you get an unexpected slave's mysqld restart.

Why exactly is it recommended to use --skip-slave-start? What happens if the slave indeed restarts - does it just forget the UNTIL part and replicate to the end of the binlog?

  • Does it only forget the UNTIL part and replicate to the end of the binlog?

    That's right.

    If a previously configured slave is started without --skip-slave-start, it will use the information stored in master.info to automatically reconnect and continue replication as normal, that is, without the UNTIL clause, which means proceeding to the end of the binlog and waiting for new binlog events.
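
    For that reason the option is usually made permanent in the slave's configuration rather than passed on the command line. A minimal sketch of the relevant my.cnf fragment:

    [mysqld]
    skip-slave-start

    With this in place, an unexpected mysqld restart leaves the replication threads stopped, and you decide when to issue the next START SLAVE UNTIL.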

    Rafał Dowgird : Thanks, I hope there are no other hidden reasons :)
    From Dan Carley

Free tool for analysis of IIS logfiles?

Possible Duplicate:
Any freeware IIS log analyzer?

Hi there,

What are the best free tools available to analyze IIS logfiles?

Thanks in advance!

Open source (Linux) server monitoring software with a lightweight footprint

What OSS monitoring solutions are there? I only know Nagios and Cacti.

Are there any real-time performance monitoring tools?

  • You can take a look at Monit.

    http://mmonit.com/monit/
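
    Monit is driven by small declarative checks in /etc/monitrc. A minimal sketch (the pidfile and init-script paths are assumptions for a typical Debian-style box):

    set daemon 60
    check process sshd with pidfile /var/run/sshd.pid
        start program = "/etc/init.d/ssh start"
        stop program  = "/etc/init.d/ssh stop"
        if failed port 22 protocol ssh then restart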

    From Ellendhel
  • Have a look at http://www.zabbix.com

    It's not as fully featured as Nagios, but it's improving pretty rapidly. It's quite lightweight, both in terms of server load for output and for the collectors too.

    DisabledLeopard : If you do have the spare CPU & memory, consider ZenOss Core http://zenoss.org - after you get used to its user interface it's really nice to use, and it has a lot of very cool features [like different levels of user accounts, reports etc] that so far I've only seen available in commercial apps.
    Martijn Heemels : Zenoss Core is great (my company uses it), but it's not lightweight.
  • For trend tracking: Munin. You can also use it as a source of alerts for Nagios.

    From pQd
  • The monitoring agent, sd-agent, for our hosted monitoring service, Server Density, is released under the open source FreeBSD license and is designed to be very lightweight.

    Martin K. : It's lightweight and nearly exactly what I was searching for. But only for one server; if you want to monitor more you have to pay! That's not an open source solution.
    DavidM : The agent you install on your server is open source (FreeBSD license). The monitoring is provided through our hosted service, which is free for 1 server (& limited functionality) and then chargeable for more features and servers.
    From DavidM

How to set up an Outlook rule on remote machine(s)

I need to set up a custom Outlook 2007 rule and push it to one or more systems without user interaction. I do NOT have access to the Exchange server. How would I do this? A reg key?

Thanks, -Mathew

  • In earlier versions of Outlook, rules were housed on the client in a file known as a .rwz file. As of Outlook 2007 these are no longer used, as rules are now stored on the server instead. However, the only way I know of would require the user to import the .rwz file, which then copies the settings up to the server. So, to answer the question: without interaction, there is no way of doing this that I know of.

    The .RWZ file would indeed exist if it was migrated from another machine or the software was upgraded from another version of Outlook.

    As far as getting it to another machine, that could be done a myriad of ways, but it would still require user intervention to import the .rwz.

    MathewC : I want to give you as much credit as possible since you pointed out the .rwz file which does exist and work in 2007. I just need to know how to get it on a remote machine.
    John Gardeniers : Not entirely correct. At least prior to 2007 (which I can't answer for) rules could be either on the client or on the server, depending on the actions specified in the rule.
    geeklin : The question was for Outlook 2007, which stores on the server.
    John Gardeniers : Sorry, I didn't read it properly.
    MathewC : It can be imported on the local machine.
    From geeklin
  • Outlook Web Access (webmail) will allow you to log in as that user and set up a rule on the Exchange server. :)

    : This is absolutely correct! I have done this in the past and it worked like a champ. This is also a useful method for setting up mail forwarding without user interaction or getting on the server.
    Farseeker : He can't log in as them
    Garrett : What I'm curious about is what is he trying to accomplish? Maybe if he shed some light on the what instead of the how we could offer an alternative to rules altogether? If it's just some executive who can't be bothered to set aside 3 minutes to get help setting up his rules, who cares? :P
    From Garrett
  • Use the OCT (Office Customization Tool) to deploy Outlook 2007.

    From Nasa

Building a build server on an Atom Mini-ITX PC

I am planning on turning my 1.6GHz Atom, 1GB RAM Mini-ITX PC into a build machine. I am thinking about having it build for multiple environments, so I was planning on using VirtualBox to create a VM environment.

Will my machine handle multiple VMs? How many should I max at?
Should I use Linux as the host and have XP as a guest or should I use XP as the host and have CC.Net run on the host (I only want to use one license of XP)?
I plan to host my SVN repositories in their own VM; does that make sense?

I will post more questions as I think of them or as they become relevant.

  • I advise against using an Intel Atom CPU for this kind of work because of its lack of out-of-order execution.

    If you really want to use that machine, you should not use virtualization, and you should expect low performance. If you have the time, run tests with your expected workload.

    phsr : Do you suggest that I just set it up as a single os box, no VM, and use my build server software of choice?
    Jan Jungnickel : I updated my response accordingly.
    phsr : I'm not expecting this PC to scream; it's a spare computer I have lying around, and I have some software projects that I am currently working on, so I figured I would build a build server for them
    osij2is : Agreed. Atom CPU is not built for this purpose. Don't try to use it out of context. Yes, power savings are a good thing, but there are better CPUs/systems for this precise type of work.
  • If you're not after too much performance then it should be suitable; I would, however, recommend upgrading your memory. 1GB is a little on the low side; increasing it to 2GB or even 3GB will make a heap of difference.

    I would also run SVN on the host, since if it is running in a VM you need to have both the host and the VM running.

    With the current RAM configuration you would only be able to run one VM; with more RAM you could run 3 or 4 VMs at the same time. Keep drive space in mind too: the more VMs you set up, the more space you require.
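
    If you do go the VirtualBox route, per-VM memory is easy to cap from the command line. A sketch with a hypothetical VM name (flag syntax per current VBoxManage releases):

    VBoxManage createvm --name build-xp --register
    VBoxManage modifyvm build-xp --memory 512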

    phsr : It may have 2GB of RAM, I don't remember what it has, I do know that it maxes at 2GB
    From Lima
  • Have a look at my server: http://howto.basjes.nl/linux/installing-my-new-server

    I have it running under CentOS 5.3 64-bit with VMware Server and 2 VMs (also running CentOS 5.3 64-bit). I have VLANs configured, a firewall, etc. I also have an ISDN card in it, and it also serves as our home Asterisk box, which allows free-of-charge VoIP calling with our family.

    Works like a charm for our home purposes.

What should I do to scale OUT a SQL 2008 database?

Hi folks,

I'm about to scale UP our SQL 2008 database. Simple. But I might need to scale OUT our SQL databases.

For a simple scale-out situation (i.e. distributing the processing load), are there some good initial best practices? I know there are many solutions which will be product-specific: many writes and not many reads, many reads and not many writes, a bit of both, etc.

But for a site that's pretty read-heavy (rather than write-heavy), is there a common starting point? e.g. grab a second SQL box, add some sync thingy and off you go.

cheers :)

  • These days hardware is pretty powerful and cheap, so you can go a long way just by using a decent server. If, as you say, your database is mostly for reading, then a PowerEdge 2950 with 16GB of RAM and a RAID5 (or 6) array of six 15K disks is a very powerful base for a SQL Server; add as many cores as you want, but even twin quad-cores isn't that expensive. I think the 2950 will take up to 64GB of RAM, though this will be expensive!

    Your hardware may already be this powerful. If so, you're looking at a step change in power and cost, and I think you need better advice than just a few posts on ServerFault :-)

    JR

  • How up-to-date does the data on the other boxes need to be?

    MSSQL has a great one-way sync relationship setup. You have to be licensed for appropriate versions of SQL Server (I don't think it's included in the most basic one), but it's exceptionally easy to set up.

    The only catch is that you can only write to one location, all the other locations need to be read only. For two-way syncing (if you're going to be writing at all) it's a lot more complex.

    So in short, yes, a 2nd box with a sync thingie will work quite well, but you will also need to do your own load balancing (i.e. have one web server read one SQL server, another web server read the other SQL server), as they still appear as separate instances. Otherwise you're into clustering, which is another kettle of fish.

    So, this sync thingie - what is it and how do you set it up? Well, in your SQL Management Studio (SSMS) you will see a folder for "Replication" in the navigation pane, with publications and subscriptions.

    In a nutshell, you will:

    • Publish a database on the primary database server
    • Subscribe to a publication on the secondary, read-only servers
    • The subscribers will synchronise on a schedule (this can be constant for almost-instant replication)

    There are a lot of articles, so just google SQL Server Replication.

    As far as hardware goes, our primary DB server is a dual quad-core with 4GB of RAM. Our slaves are dual-core with 4GB of RAM. You can buy a lot of servers at that level. Of course, it all depends on what kind of load you expect.

    Pure.Krome : @Farseeker: "but you will also need to do your own load balancing (i.e. have one web server read one SQL server, another web server read the other SQL server), as they still appear as separate instances" Can you elaborate on this? Are there some links to how other people have done this? The rest of the answer makes sense :)
    Farseeker : Sure. SQL Server A will be a full read-write 'master', or 'publisher', and SQL Servers B & C will be read-only slaves, 'subscribers'. Let's say you also have three web servers, WEB-A, WEB-B, WEB-C. If each web server points to SQL-A then SQL-B and SQL-C will sit around doing nothing. You would have to configure WEB-B and WEB-C to read from SQL-B and SQL-C, and write to SQL-A. If you're after a solution where all web servers connect to, say, SQLSERVER and a broker then chooses the server with the least load, that's something else altogether.
    Farseeker : Or, an easy (but inefficient) way to do this, is in your application, have two DB connections. The 'DBWRITE' connection will ALWAYS connect to SQL-A and 'DBREAD' will cycle through SQL-A, SQL-B and SQL-C (Switching to a different one each time it connects/reconnects). Like I said, it's not going to be particularly efficient but it could work.
    Pure.Krome : are there any links to projects that have this or pages that elaborate on this .. with hopefully some info against a .NET project using SQL Server?
    Farseeker : Hmm, I've been re-reading your question and brushing up on my SQL Replication settings for a project at work and I think you'd be better off with an SQL Merge replication. I've never used it myself, but here's some reading: http://msdn.microsoft.com/en-us/library/ms151329.aspx and http://www.databasejournal.com/features/mssql/article.php/1438231/Setting-Up-Merge-Replication-A-Step-by-step-Guide.htm
    From Farseeker
  • Hi,

    Well, this is a difficult question to answer, in the sense that you (the devs) had to think about this problem while designing the database. But take a look into sharded databases; for a high read/write ratio this might be the easiest to implement. Another solution might be replication.

    Enjoy, m

    From Martynas
  • Hi, the question is more complex than a few posts can answer. There are too many options for how to scale your servers; just to name a few: failover clustering, log shipping, replication, database mirroring.

    I would recommend you a wonderful book that can be downloaded here : Pro SQL Server 2005 High Availability

    Hope you will find the answers to your questions there.

    Pure.Krome : Cheers for the link. Unfortunately, that book is not part of the download in the link provided :( damn!
    From Bogdan_Ch

Using a more recent kernel for Xen dom0 in production

Does anyone have experience running Xen dom0 on a more recent kernel than the stock 2.6.18?

What host distro are you running? What release of Xen (or hg/git changeset)? What set of patches are you using on the kernel source? (Has anyone got the pvops dom0 stuff working in production, or is it better to stick with something like the SUSE patches?)

Any other tips and tricks to running a more recent kernel version as dom0 would be helpful.

  • I run Debian Lenny with its Xen kernel patches and userspace (Xen 3.2.something on a 2.6.26 kernel); it works fine. No tips and tricks required.

    TRS-80 : For the record Debian is using the SUSE patches: http://wiki.debian.org/Xen#Dom0.28host.29
    From womble
  • You might want to try OpenSolaris as a dom0 - their Xen is developed in sync with the kernel, so no need to rely on patches.

    thelsdj : I actually spent yesterday going down this road. Managed to install OpenSolaris on a VM, then used AI to install it on a server. Got xVM installed and booting. Have yet to build any guests. The main problem I have is that it can't be a drop-in replacement for Linux on our existing servers because it can't use our existing LVM2 storage. Maybe some day down the road when we build new servers.
    From TRS-80
  • I wanted to update this question and say that recent xen-unstable source compiles a 2.6.31 pvops kernel by default and my guess is that the next release of Xen will be using a much more recent kernel.

    Lee B : pvops sounds familiar. I think that might be the kernel branch that rackspace/slicehost are using. Either way, rs/slicehost are using Xen kernels up around 2.6.28, so it can be done.
    Antoine Benkemoun : Xen 4.0 has been released and uses pvops by default!
    From thelsdj
  • We're currently on 2.6.31 from gentoo-xen-kernel for our Gentoo dom0 and domU hosts in production. This, like a lot of other ports, is based on the patches from openSUSE.

    From Dan Carley

How do you configure a NetApp Filer to use LDAP for username/password/uid?

We have a NetApp Filer, and want to access it via Samba/CIFS, and have it use the username/password/uid available in our OpenLDAP server. We already do this successfully with Samba 3 against OpenLDAP, so we have all the appropriate posix attributes as well as NT/LanManager password attributes.

The goal is that a user can mount their directory in Windows with the same username/password as their Linux login, and files created there will have the correct uid so it just works when they go back to Linux.

Again, we have all this working with Samba/OpenLDAP/Linux, so the question is not about that configuration. It's about configuring a NetApp against such a system.

  • I believe this is what you want:

    options ldap.base dc=example,dc=com
    options ldap.servers ldap.example.com
    options ldap.enable on

    Edit /etc/nsswitch.conf

    hosts: files dns
    passwd: ldap files
    netgroup: ldap files
    group: ldap files
    shadow: files nis

    This link now requires a valid login

    As always, it's worth testing things on the simulator first :)

    Jared Oberhaus : Thanks! I'll try that out.
    James : If you have any issues then edit the question.
    From James

Windows Server 2008, "The requested operation requires elevation"

Sorry for the newbie question, but I'm a developer who hasn't touched a Windows Server in years. A recent project has required me to set up and configure a development VM running Windows Server 2008, IIS 7, and SQL Server 2008 Express. I need to run commands in the command window and get the following message:

"The requested operation requires elevation"

Reminds me of sudo in Ubuntu; what's the command to elevate?

  • This is a stab in the dark, but can you right-click on the command icon, select "Run as", and select an account with administrator privileges?

    craigmoliver : I'm trying to run a command
    craigmoliver : oh, nevermind, I get it
    craigmoliver : I right clicked and found "Run as Administrator", and that worked, thanks!
  • After you type cmd in the "run" box you can press Ctrl-Shift-Enter.

    From Adam Brand
  • Alternatively, just run the command prompt itself as Administrator, and you won't need to worry about needing elevation. You will, however, need to worry about having it.
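
    If you need a scriptable equivalent, runas is the closest analogue to sudo; it prompts for the named account's password (though note that under UAC it is not always identical to "Run as Administrator"):

    runas /user:Administrator cmd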

    From RainyRat

How do I run a process remotely from inside a shell script?

Hello. What I am trying to do is run the shell script 'ABCDE' on one machine, but use another machine to support it. ABCDE is an all-in-one ripper/encoder/tagger script for turning CDs into digital files. I have one machine which has a fast CD drive but a slow processor, and another machine with a fast processor but a slow CD drive. I have set up SSH tunneling between the two, and also an sshfs, so the ripped files can be shared by both machines.

There is a configuration file in my home directory that allows me to set the path of the encoder. Here are the important lines (there are others relating to CD drive location, output format munging, etc.):

OGGENCODERSYNTAX=oggenc         # Specify encoder for Ogg Vorbis

OGGENC=/usr/bin/oggenc          # Path to Ogg Vorbis encoder

OGGENCOPTS='-q 6'               # Options for Ogg Vorbis

OUTPUTTYPE="ogg"                # Type of file to create

When I try to give the script some indication that the path to the encoder is elsewhere, I get problems.

For example:

OGGENC=`ssh WWW.XXX.YYY.ZZZ /usr/bin/oggenc`

It runs the oggenc command before the rest of the shell script (the backticks are evaluated as soon as the configuration file is read). And of course, since the encoder has no input at that moment, it gives an error message, and the program moves on to use a default setting from /etc/abcde.conf.

I have tried any number of combinations of " , ' , , \' , \ , \" , etc, but either it doesn't work at all, or it executes the encoder too early.

Please let me know what I'm doing wrong, or if this can even be done at all.

Thanks so much!

  • Bear with me as I understand your setup. Machine-A is the fast-CD device
    and Machine-B has the fast processor. You seem to be running the job from
    Machine-A (to be close to the drive), and you have shared the file space
    such that the ripped files can be seen from Machine-B to be encoded, and
    probably tagged too.

    Now your problem is to fire the encoder operation remotely from Machine-A.

    ssh user@Machine-B exec /shared/path/encodeScript /shared/path/$filename
    

    Where,

    • $filename is the file to be encoded and
    • /shared/path is visible across the machines

    should do individual file encoding on the remote Machine-B.
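
    Since the backticks in OGGENC=`ssh ...` run the command the moment the config file is read, another approach is to point OGGENC at a tiny local wrapper script instead. A sketch, assuming the shared mount means the paths abcde passes are also valid on Machine-B (arguments containing spaces would need extra quoting across ssh):

    #!/bin/sh
    # /usr/local/bin/remote-oggenc (hypothetical path): forward all
    # arguments to the encoder on the fast machine over SSH
    exec ssh user@Machine-B /usr/bin/oggenc "$@"

    Then set OGGENC=/usr/local/bin/remote-oggenc in the abcde configuration and mark the wrapper executable.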

    nik : Your WAV file would be the filename reference. The script could delete the WAV, or the caller can delete it locally from Machine-A after the remote encoder call returns.
    From nik

Webbased Log Parser for mod_log_sql

Our frontend webservers are now logging all our weblogs into MySQL using mod_log_sql, freeing up thousands of "AccessLog" directives in our Apache config (we're running between 600-900 virtual hosts on our servers now).

That being said, I'm trying to find a reasonable weblog analyzer that works with mod_log_sql. I've used Webalizer and AWStats for years and I really like them; however, neither tool supports SQL-based logging.

It doesn't have to be real-time, but it does at least have to be able to grab data from a database table.

Anyone have any suggestions?

  • There is a PHP script called Skeith that does what you want.

    Go here to download http://skeith.sourceforge.net/

    Here is a snip from the site:

    Skeith is a simple log analyzer and reporter. Specifically, Skeith works for the mod_log_sql module for Apache (it should work for mod_log_mysql too, but thus far testing has only been done with mod_log_sql).

    Skeith's main feature that sets it apart from other log analyzers is that it can generate the log file for a given day or month on the fly. This way the sysadmin can look at the exact requests that may be questionable or harmful.
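
    Even without a dedicated analyzer, ad-hoc reports are straightforward once the logs are in MySQL. A sketch (the access_log table and the virtual_host/time_stamp columns follow mod_log_sql's default schema, but check your own CREATE TABLE; the database name is hypothetical):

    mysql apachelogs -e 'SELECT virtual_host, COUNT(*) AS hits
        FROM access_log
        WHERE time_stamp >= UNIX_TIMESTAMP(CURDATE())
        GROUP BY virtual_host ORDER BY hits DESC'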

    From Freddy
  • I would not recommend storing logs in any kind of SQL database. SQL storage engines are simply not fit for that: as the amount of data increases (as it surely will with nearly 1000 virtual hosts), write speed will suffer badly. Deletion from the database is also a painful operation, as the table will get fragmented, further increasing read/write latency and decreasing throughput.

    If you insist on storing logs in an SQL database, you will have to do your best to filter out as much unimportant data as you can.

    LapTop006 : Actually even fairly basic machines (eg, laptops) can do ~100k lines a minute logging to MySQL. Yes it adds overhead, but really not that much.
    : I agree with the above comment: we use PostgreSQL to log our web logs, and we have two PostgreSQL servers using RAID10 to handle Apache logs, among other databases, logging for over 900 web sites across 6 servers. If/when we run up against a database performance issue, there are caching tools available such as PGPOOL and MEMCACHE for databases. Although asdmin's point is valid in that it has the potential to be a bottleneck if not monitored and/or set up correctly.
    asdmin : LapTop006: of course raw writes to a database are possible at incredible speed (as long as indexes are not used and data is not deleted), but as soon as data gets deleted (fragmented tables), indexes are introduced (an inserting nightmare), or a fairly simple "like" string comparison occurs (remember, raw text), the database turns out to be useless. Archiving from SQL tables is also a pain, as no one would like to let an SQL database grow forever. Archiving also means deletion, so fragmented tables are back in business...
    From asdmin

Upgrade MediaWiki on a Windows Server 2003 box?

I was wondering if anyone knew the location of the relevant documentation, or had any experience updating MediaWiki to the newest version,

1.15.1

I see lots of documentation for Linux installs, but I can't seem to find any for Windows.

I just "inherited" this box and don't usually handle this type of stuff so any step by step guide would be great.

  • Most of the upgrade stuff happens in the database; the operating system on the server shouldn't make a massive difference.

  • You upload the new files and then run the upgrade script as described in the upgrade readme file. That documentation isn't Linux-based; it's the same no matter what operating system your install is running on.
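
    For reference, the script in question is maintenance/update.php, run from the wiki root with the CLI php binary; on Windows that looks something like this (the install path is hypothetical, and older releases also want the AdminSettings.php described in the UPGRADE file):

    cd C:\inetpub\wwwroot\wiki
    php maintenance\update.php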

    From p858snake
  • I have upgraded MediaWiki on W2K3 many times - the official upgrade instructions are fine. Go for it :)

    From Andrew H

I can't find the firewall on Windows Server 2003 Enterprise Edition

I just want to open port 1433 for SQL Server locally. I don't know who configured this server, so when I try `netsh firewall`, the prompt says "the command does not exist". By the way, routing and NAT are disabled.

I cannot find the Windows Firewall option on this server, which is the strangest part!

Server Version : Windows Server 2003 Enterprise Edition (5.2, Build 3790) without any Service Pack

Thanks!

[screenshot]

  • Go to the Control Panel and double-click on Windows Firewall. It will ask you if you want to start ICS; say yes, and then you should be able to add the exception there. Don't forget to add exceptions for RDP and www if you use them.

    Angel Escobedo : I can't find Windows Firewall in the Control Panel, thanks
    mrdenny : Not sure how, but it sounds like the Windows Firewall has been removed. Does the Windows Firewall service exist in the service list?
    Angel Escobedo : omg, it doesn't exist!
    Adam Brand : Hm that is no good. Do you have an Anti-Virus program installed? If not, install AVG or something and give it a scan. The firewall is not something that can be easily installed/uninstalled.
    Angel Escobedo : When I began with this two months ago, McAfee was installed; nowadays the AV is NOD32
    Angel Escobedo : By the way, the main IP is protected by a perimeter firewall and Unix routing, so in short terms it's "a little secure for us", but I want to try opening ports manually. How can I achieve this?
    : If you can't seem to find the firewall on the server then what leads you to believe the box has firewall rules? Could it be that SQL server is either not listening on port 1433 or not using TCP/IP as a transport? Have you tried using localhost:1433?
    Adam Brand : Sounds really bad to me...I would consider nuking the server if the firewall is gone.
    From Adam Brand
  • Try opening the Windows Firewall UI from the Start Menu's Run command:

    • Start-> Run
    • firewall.cpl

    You should have the Windows Firewall now.

    If the Windows Firewall isn't able to run on your system, consider updating your system to Windows 2003 Service Pack 2 or better.

    Angel Escobedo : please check the image above
    pcampbell : Thanks to Albic, it appears there IS no Windows Firewall pre-SP1. Of course, it's always recommended to keep up with security fixes. Are you able to apply those service packs?
    From pcampbell
  • According to the Wikipedia article the Windows Firewall for Windows Server 2003 was introduced with Service Pack 1. Therefore it is not included in the RTM build which you are running.

    After the installation of Service Pack 1 (or, better, Service Pack 2) you should be able to access the feature.

    This Technet page describes the changes to the firewall in Service Pack 1 in more detail.

    From Albic
  • Looks like you may not have the Windows Firewall after all... :)

    But your main goal seems to be to enable connectivity to your SQL Server instance. I don't know what version/edition of SQL Server you have, but I am guessing you have SQL 2005, as 2008 would not install on Win2003 RTM. So here is a useful link: http://support.microsoft.com/kb/914277/en-us
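
    To confirm whether the instance is even listening before chasing firewall rules any further, a quick check from cmd.exe:

    netstat -an | find "1433"

    No output there would point at SQL Server's network configuration (e.g. TCP/IP disabled as a transport) rather than at any firewall.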

    From