Thursday, January 27, 2011

Deploying OS X / Boot Camp / Windows 7

I have to deploy 5 Macs running OS X that dual boot into Windows 7 via Boot Camp.

What's the most efficient way to do this?

Many thanks

  • I recently did exactly this on 20 OS X systems. I just created a custom image, ran sysprep, captured the image with ImageX and then booted each Mac into Windows PE to apply the image with ImageX. It worked great! We tried to use DeployStudio but it made a mess of the partition and left me with an unusable Win7 install. I'm curious to see if anyone has a better solution though, as I'm always up for improving efficiency!
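
    For reference, a minimal sketch of the sysprep/ImageX steps described above, assuming the image is staged on an external drive that shows up as D: in Windows PE (paths and image names are illustrative):

        rem On the reference Mac, generalize the Windows 7 install before capturing
        C:\Windows\System32\sysprep\sysprep.exe /generalize /oobe /shutdown

        rem From Windows PE, capture the Boot Camp partition to a WIM
        imagex /capture C: D:\images\win7-bootcamp.wim "Win7 Boot Camp reference"

        rem On each target Mac, boot into Windows PE and apply image index 1
        imagex /apply D:\images\win7-bootcamp.wim 1 C: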

  • I would simply use dd in the Terminal on Mac OS X to create an image of the whole hard drive. You can then use the Mac OS X DVD that also contains a Terminal and dd to write the image you made (which you can store on a USB HDD) to the hard drives of the other Macs (which I assume are identical).
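
    As a rough sketch (assuming the internal disk is /dev/disk0 and the USB drive is mounted at /Volumes/Backup; check the device names with diskutil list first):

        # On the source Mac, booted from the OS X DVD with the USB drive attached
        dd if=/dev/disk0 of=/Volumes/Backup/mac-win7.img bs=1m

        # On each (identical) target Mac, also booted from the DVD
        dd if=/Volumes/Backup/mac-win7.img of=/dev/disk0 bs=1m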


How to configure Windows user accounts for an ODBC network connection with NT authentication?

I'm trying to create a connection to an SQL Server database from the ODBC Data Source Administrator using "Windows NT authentication using the network login ID". Both server and client are running Windows XP.

It appears that any account with administrator privileges can add the data source on the server*, though connection attempts from the client result in error messages that suggest it is trying to authenticate using a guest account.

I found a Microsoft support page that says:

For SQL Server...: connect using the impersonated user account.

But it doesn't offer advice about how to do that.

How do I impersonate a user account on the server?

or (since it sounds like that would lead to an unfortunate squashing of privileges and loss of accountability):

How do I give an account on the client privileges on the server database and then ensure the client attempts authentication with the privileged account and not with a guest account?


I'm aware that I'm providing rather sparse information. This is because I'm in unfamiliar territory and don't know what's pertinent. I'll attempt to add any requested information as quickly as possible.


*I'm planning on tightening privileges straight after I get it working as it stands.

  • It sounds like you'd get some benefit from documentation describing the "basics" behind the security system in Microsoft SQL Server.

    I'd have a look at these docs relating to principals, permissions, and securables to get a feel for how you can apply permission for users/groups to access objects in a granular fashion in SQL Server.

    Those docs are a bit abstract, but they're the nitty-gritty details.

    Getting away from Microsoft, there's a really nice "crib sheet" that Robyn Page wrote that gives good background on the security model.

    For a 10,000 foot view, what you're looking to do is create Active Directory groups (which you'll make users members of) to which you'll grant various permissions on resources ("securables") hosted by the SQL Server computer. What specific permissions and securables you'll be dealing with depends on your specific application. If certain users need UPDATE access to certain tables, or the ability to execute certain stored procedures, you'll use SQL Management Studio (or, ugh, Enterprise Manager, if you're on SQL Server 2000 or older) to grant the desired permissions.
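
    As a concrete illustration (the server, database, table, and Active Directory group names below are all hypothetical, and this assumes SQL Server 2005 or later; on SQL Server 2000 you'd use sp_grantlogin and the Enterprise Manager GUI instead), granting an AD group access from the command line would look something like:

        sqlcmd -S MYSERVER -E -Q "CREATE LOGIN [MYDOMAIN\AppUsers] FROM WINDOWS"
        sqlcmd -S MYSERVER -E -d SalesDB -Q "CREATE USER [MYDOMAIN\AppUsers] FOR LOGIN [MYDOMAIN\AppUsers]"
        sqlcmd -S MYSERVER -E -d SalesDB -Q "GRANT SELECT, UPDATE ON dbo.Orders TO [MYDOMAIN\AppUsers]"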

    Ian Mackinnon : Very helpful, thank you. Especially for mentioning "Active Directory groups", which seems like the key concept I needed a pointer to and would never have guessed the name of! I'm gonna get reading...
  • Are the SQL Server and the XP workstation in the same domain? If this is a direct XP-workstation-to-SQL connection, it should be using the credentials specified (from a differing domain), not guest. The article you point to is talking about Reporting Services - that is a whole different beast. The two important factors there are:

    1. Ensuring that the user accounts have access to the proper database
    2. Ensuring that the report server computer account is trusted for delegation.

    I suspect that if this is a Reporting Services question, then #2 is your problem. In order for a server to use credentials from another server it needs to delegate authentication. The steps required are listed here. Once trusted, a server can then impersonate a user account.

    From Jim B

"Address already in use" error from socket bind, when ports are not being used

I cannot bind (using C or Python sockets) to any port in the range 59969-60000.

Using lsof, netstat and fuser I do not see any processes using these ports.

I can bind to other ports, such as 59900-59968 and 60001-60009.

My OS is CentOS release 5.5 (Final), kernel 2.6.18-194.3.1.el5.

There must be something I'm missing. Does anyone have any idea how to debug why this port range is not usable?

Cheers, Ivan

  • I would check local firewall settings. Since iptables is not a separate process, it will not usually show up in lsof, netstat, or fuser.
    What is the output of "sudo iptables -L -n"?

    From flashnode

Can a server run both haproxy and nginx? How would they both work?

Can a single server run both haproxy and nginx at the same time?

I guess I would have to run nginx on a different port, and then route specific traffic to nginx?

Say I have two domains: one requires nginx and the other requires another service.

Can haproxy send requests for domain #1 to nginx and requests for domain #2 to another server?

  • Yes, you can; in fact, this is what Stack Overflow did for a while. Basically you pick one to be in front of the other. The one in front would be on port 80 and the one behind it would be on some other port of your choosing. You would just treat the second load balancer as if it were a web server behind the load balancer. Just keep in mind you will probably want to enable the X-Forwarded-For feature so you actually see the client IPs.

    If you want them both to be on port 80 and not be in front of each other, then you can bind them each to a specific IP address.
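
    For the "haproxy in front" arrangement, a minimal haproxy config sketch for routing by Host header might look like this (domains, addresses, and the nginx port are all hypothetical, and timeouts/logging are omitted for brevity):

        defaults
            mode http
            option forwardfor

        frontend http-in
            bind *:80
            acl is_site1 hdr(host) -i www.site-one.example
            use_backend nginx_site if is_site1
            default_backend other_site

        backend nginx_site
            server nginx1 127.0.0.1:8080

        backend other_site
            server app1 192.168.0.20:80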

    See this post for a similar situation: Nginx (for static files) and Apache (for dynamic content)?

Connecting a Linux system to a switch from 2 interfaces

To get proper redundancy, we've installed 2 switches in our network and connected them to each other. We now want to hook our servers up to both switches. Since all servers have 2 Ethernet ports, this should be possible.

The big problem is that we want to do this using just 1 IP address per server. Does anyone know how to configure Linux (and Windows too, actually) so it supports this, while at the same time avoiding any form of looping? I know it's possible to just set the IP on both interfaces, but that causes ARP issues when disconnecting one of the switches.

  • Use link aggregation, which is otherwise known as bonding or teaming. The exact methods of implementation vary depending upon the OS and distribution.

    It will allow you to use both interfaces as a single interface, which will provide load balancing and enable high availability for the network interfaces. It's highly configurable depending upon your exact specification.

    ErikA : Agreed. One thing to note, though, is that depending on which LA tech you use, it may require special configuration on the switch ports.
    From Warner
  • With Linux, the simplest bonding method to use when connecting to two different switches is active-backup mode. With this, only one interface is active at a time and you can set which one you want to have priority. This method requires no special configuration on the switches.
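
    On a RHEL/CentOS-style distribution, the configuration would look roughly like this (the IP address is hypothetical; Debian/Ubuntu use /etc/network/interfaces instead):

        # /etc/modprobe.conf (or /etc/modprobe.d/bonding.conf)
        alias bond0 bonding
        options bond0 mode=active-backup miimon=100 primary=eth0

        # /etc/sysconfig/network-scripts/ifcfg-bond0
        DEVICE=bond0
        IPADDR=192.168.1.10
        NETMASK=255.255.255.0
        ONBOOT=yes
        BOOTPROTO=none

        # /etc/sysconfig/network-scripts/ifcfg-eth0 (and similarly for ifcfg-eth1)
        DEVICE=eth0
        MASTER=bond0
        SLAVE=yes
        ONBOOT=yes
        BOOTPROTO=none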

    With Windows you are going to need to install a utility from the vendor of your NICs. For Broadcom NICs you want the Broadcom Advanced Control Suite (BACS); Intel has a similar utility. Bonding on Windows can cause problems with certain things (e.g. in my experience it's not a good idea on domain controllers).

    Oh, and do set up STP or RSTP to prevent switching loops ... it is really quite easy usually. Often something like spanning-tree

  • In the Linux world you need to use network bonding (a kernel module named bonding). In its documentation you can read all that you need to properly configure it in a redundant setup.

    If I recall correctly, you also need switch support for it (in the form of 802.1D Spanning Tree Protocol or the like), so you won't be able to do that with standard unmanaged low-end switches. But I may be wrong here, so please go check the documentation of the bonding support and of your switch.

    I know that Windows Server systems have a similar bonding feature. I don't know its details nor its availability on desktop systems.

    From Luke404

Windows Server 2008 R2 DC and MSDN

This question would be for anyone that has downloaded Windows Server 2008 R2 Data Centre edition from MSDN and set it up in a Hyper-V / VM environment (Note: this is for a development/testing (non-production) environment!).

I see if you purchase the Data Centre edition you get unlimited virtual images, whereas if you purchase Enterprise you get licenses for the host + 4 VM instances. (taken from this comparison chart)

I know with MSDN you get 'x' keys for each instance of the O/S that you can download (is it 5 or 10 for 2008 Server?). So taking it to the extreme I am wondering if you could install 10 instances of the data centre edition and have each one running 20 virtual instances? As long as it's not being used for production then that would be OK.

To put this in context, we are looking to set up a virtualised development environment that may have the requirement for 5 instances. We are wondering if that would need 5 keys (one for each instance) or could it be done through a single Data Centre installation?

Hopefully that makes sense. As MSDN allows you to set up for development / testing this does feel like something common.

  • I think you would need individual keys, as typically with a Datacenter install you would set up a KMS to handle activations. You should have plenty of keys, as anyone that needs to touch those boxes is required to have an MSDN license, and if it's just for you, you should be able to get the number of activations raised (as MSDN is licensed per person for an unlimited number of devices). Note that MSDN licenses are for development only (specifically design, develop, test, or demonstrate), so you can't use an MSDN license as a backup to production.
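
    For what it's worth, pointing a VM at a KMS host and activating it is just a couple of commands (the KMS host name below is hypothetical):

        cscript %windir%\system32\slmgr.vbs /skms kms01.example.com:1688
        cscript %windir%\system32\slmgr.vbs /ato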

    Standard disclaimer: talk to your Microsoft licensing specialist for specifics about your license agreement.

    Paul Hadfield : @Jim: Definitely aware of the no-production rule. Also, because it will be a "managed" development environment, our IT support guys that have rights to those machines will also need their own Visual Studio + MSDN; TechNet or MSDN on its own doesn't actually come with SQL Server, I found out today. But it could still work out cheaper than buying licenses for O/S + SQL.
    Jim B : That depends. If this really is for testing then you can download a trial edition of SQL that will run for 180 days, and also use trial editions of Windows. Your MSDN allows you to install Windows for your purposes and the trials allow you to deploy to a "QA" build.
    From Jim B

Sonicwall VPN user cannot be accessed through VPN tunnel

I have a user accessing a Sonicwall NSA 2400 via VPN (Site A). This Sonicwall has a VPN tunnel to another site (Site B). The user can ping servers at Site B, and access websites located on them, etc. People on the physical LAN at Site A can ping and telnet to the VPN user. However, the problem is that the servers located at Site B cannot contact the VPN user. They can contact any computer on the LAN, but no VPN users. I have done a packet capture, and any time I ping the VPN user from the servers at Site B, the packet is "Consumed" on the firewall. I am pretty good with networking concepts, but this has me stumped.

  • There are a number of possible reasons for this.

    1. Routing: If the Sonicwall performs SNAT for any traffic going from Site A to Site B (possibly due to overlapping subnets), then traffic from the VPN user towards the servers at B will work, and the reverse will also work, because the original address has been replaced by the SNAT with the address of the Sonicwall, which performs the reverse translation, too. However, if the servers on Site B do not know where to route the traffic for the VPN user, it will go out through the default gateway and therefore never reach the VPN user. Remember: VPN users often have IP addresses in a subnet very different from the subnet used on Site A, so that the VPN router can route the packets to the individual clients.
    2. Firewall: It is possible that your firewall has rules that allow NEW packets (as in state NEW) to go from Site A to Site B, but not the other way around. That would also explain your situation.
    3. VPN configuration: More of a theoretical one. Some VPNs can be configured such that the VPN clients are not reachable from the target LAN. Since you say the people on Site A can reach the VPN user, this seems not to be the problem.
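
    To illustrate point 1: if the servers at Site B are missing a route for the VPN client subnet, adding one directly on a server (with purely hypothetical addresses, where 192.168.50.0/24 is the VPN client pool and 10.1.1.1 is Site B's gateway towards the tunnel) would look like:

        # Linux server at Site B
        route add -net 192.168.50.0 netmask 255.255.255.0 gw 10.1.1.1

        # Windows server at Site B
        route add 192.168.50.0 mask 255.255.255.0 10.1.1.1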

    I would suggest you post your firewall rules and your routing tables, and then we can inspect that and advise further.

    From wolfgangsz

MSDN vs. Technet

What is the difference between a Technet subscription and an MSDN subscription?

  • This similar question from Stack Overflow may give you a starting point: TechNet or MSDN Subscription?

    From Glenn
  • MSDN is for developers and has applications like Visual Studio. TechNet is for system admins and offers applications like SharePoint and Exchange.

    There is a decent amount of overlap, such as with OSes, but there is a difference in applications offered based on their target.

    STW : ...if you're on ServerFault; you want TechNet. If you're on StackOverflow, you want MSDN.
    MarkM : Hahaha pretty much
    From MarkM
  • Don't forget about the Microsoft Action Pack as well; see here for a good description:

    http://www.petri.co.il/ms%5Faction%5Fpack%5Fsubscription.htm

    It's an excellent value as well.

    From TheCleaner
  • I've just been looking into this too and came across this question, so I thought I'd add an update for two points that are relevant for my situation.

    1. TechNet is for infrastructure evaluation testing only; it cannot be used for "testing related to the software development process" (taken from "Usage for testing scenarios" in the referenced link).

    2. To use SQL Server in a development / application testing environment you must have Visual Studio 2010 + MSDN. The standalone MSDN comes with o/s versions only and does not have access to SQL Server downloads. See "Software for Development and Testing" in this comparison table.

    So, taking the above into account along with the latest info in the MS VS2010 licensing white paper: if you are setting up a development/testing environment (i.e. non-production), then anyone that "touches" (MS definition) the environment needs an MSDN - this appears to mean anyone that has "log on" access to maintain the O/S and/or deploy software/applications. However, any end users / testers that are merely testing the (installed) applications do not need MSDN subscriptions - as long as they aren't using the environments for any production-related tasks (it must be testing/evaluation only). For reference, page 21, "Demonstration Using Terminal Services" and "Acceptance Testing", of the white paper covers this.

WinSCP not listing directory problem

I'm trying to use WinSCP to FTP to my server using FTPS and then sync with a backup folder. I have had this working fine from a PC on my work intranet (i.e. the same domain), however when I try to set it up off site (which is my whole point) the same script fails. The FTP log is as follows:

. 2010-09-09 15:28:30.952 --------------------------------------------------------------------------
. 2010-09-09 15:28:30.952 WinSCP Version 4.2.8 (Build 818) (OS 5.2.3790 Service Pack 2)
. 2010-09-09 15:28:30.952 Login time: 09 September 2010 15:28:30
. 2010-09-09 15:28:30.952 --------------------------------------------------------------------------
. 2010-09-09 15:28:30.952 Session name: user1@myserver.nhs.uk
. 2010-09-09 15:28:30.952 Host name: myserver.nhs.uk (Port: 21)
. 2010-09-09 15:28:30.952 User name: user1 (Password: Yes, Key file: No)
. 2010-09-09 15:28:30.952 Tunnel: No
. 2010-09-09 15:28:30.952 Transfer Protocol: FTP
. 2010-09-09 15:28:30.952 Ping type: C, Ping interval: 30 sec; Timeout: 15 sec
. 2010-09-09 15:28:30.952 Proxy: none
. 2010-09-09 15:28:30.952 FTP: FTPS: Explicit SSL; Passive: No [Force IP: No]
. 2010-09-09 15:28:30.952 Local directory: default, Remote directory: home, Update: No, Cache: Yes
. 2010-09-09 15:28:30.952 Cache directory changes: Yes, Permanent: Yes
. 2010-09-09 15:28:30.952 DST mode: 1
. 2010-09-09 15:28:30.952 --------------------------------------------------------------------------
. 2010-09-09 15:28:30.968 Connecting to myserver.nhs.uk ...
. 2010-09-09 15:28:30.984 Connected with myserver.nhs.uk, negotiating SSL connection...
< 2010-09-09 15:28:30.999 220 Microsoft FTP Service
> 2010-09-09 15:28:30.999 AUTH SSL
< 2010-09-09 15:28:31.031 234 AUTH command ok. Expecting TLS Negotiation.
. 2010-09-09 15:28:31.187 SSL connection established. Waiting for welcome message...
> 2010-09-09 15:28:31.187 USER user1
< 2010-09-09 15:28:31.218 331 Password required for user1.
> 2010-09-09 15:28:31.218 PASS ********
< 2010-09-09 15:28:31.234 230 User logged in.
> 2010-09-09 15:28:31.234 SYST
< 2010-09-09 15:28:31.265 215 Windows_NT
> 2010-09-09 15:28:31.265 FEAT
< 2010-09-09 15:28:31.281 211-Extended features supported:
< 2010-09-09 15:28:31.281  LANG EN*
< 2010-09-09 15:28:31.281  UTF8
< 2010-09-09 15:28:31.281  AUTH TLS;TLS-C;SSL;TLS-P;
< 2010-09-09 15:28:31.281  PBSZ
< 2010-09-09 15:28:31.281  PROT C;P;
< 2010-09-09 15:28:31.281  CCC
< 2010-09-09 15:28:31.296  HOST
< 2010-09-09 15:28:31.296  SIZE
< 2010-09-09 15:28:31.296  MDTM
< 2010-09-09 15:28:31.296  REST STREAM
< 2010-09-09 15:28:31.296 211 END
> 2010-09-09 15:28:31.296 OPTS UTF8 ON
< 2010-09-09 15:28:31.312 200 OPTS UTF8 command successful - UTF8 encoding now ON.
> 2010-09-09 15:28:31.312 PBSZ 0
< 2010-09-09 15:28:31.343 200 PBSZ command successful.
> 2010-09-09 15:28:31.343 PROT P
< 2010-09-09 15:28:31.359 200 PROT command successful.
. 2010-09-09 15:28:31.359 Connected
. 2010-09-09 15:28:31.359 --------------------------------------------------------------------------
. 2010-09-09 15:28:31.359 Using FTP protocol.
. 2010-09-09 15:28:31.359 Doing startup conversation with host.
> 2010-09-09 15:28:31.359 PWD
< 2010-09-09 15:28:31.390 257 "/" is current directory.
. 2010-09-09 15:28:31.390 Getting current directory name.
. 2010-09-09 15:28:31.390 Retrieving directory listing...
> 2010-09-09 15:28:31.390 TYPE A
< 2010-09-09 15:28:31.406 200 Type set to A.
> 2010-09-09 15:28:31.421 PORT 10,222,54,3,6,38
< 2010-09-09 15:28:31.437 200 PORT command successful.
> 2010-09-09 15:28:31.437 LIST -a
< 2010-09-09 15:28:31.468 150 Opening ASCII mode data connection.
. 2010-09-09 15:28:46.968 Timeout detected.
. 2010-09-09 15:28:46.968 Could not retrieve directory listing
* 2010-09-09 15:28:46.968 (ESshFatal) Lost connection.
* 2010-09-09 15:28:46.968 Timeout detected.
* 2010-09-09 15:28:46.968 Could not retrieve directory listing
* 2010-09-09 15:28:46.968 Opening ASCII mode data connection.
* 2010-09-09 15:28:46.968 Error listing directory '/'.
. 2010-09-09 15:28:51.999 Connecting to myserver.nhs.uk ...
. 2010-09-09 15:28:52.015 Connected with myserver.nhs.uk, negotiating SSL connection...
< 2010-09-09 15:28:52.031 220 Microsoft FTP Service
> 2010-09-09 15:28:52.031 AUTH SSL
< 2010-09-09 15:28:52.062 234 AUTH command ok. Expecting TLS Negotiation.
. 2010-09-09 15:28:52.140 SSL connection established. Waiting for welcome message...
> 2010-09-09 15:28:52.140 USER user1
< 2010-09-09 15:28:52.156 331 Password required for user1.
> 2010-09-09 15:28:52.156 PASS ********
< 2010-09-09 15:28:52.187 230 User logged in.
> 2010-09-09 15:28:52.187 SYST
< 2010-09-09 15:28:52.202 215 Windows_NT
> 2010-09-09 15:28:52.202 FEAT
< 2010-09-09 15:28:52.234 211-Extended features supported:
< 2010-09-09 15:28:52.234  LANG EN*
< 2010-09-09 15:28:52.234  UTF8
< 2010-09-09 15:28:52.234  AUTH TLS;TLS-C;SSL;TLS-P;
< 2010-09-09 15:28:52.234  PBSZ
< 2010-09-09 15:28:52.234  PROT C;P;
< 2010-09-09 15:28:52.234  CCC
< 2010-09-09 15:28:52.234  HOST
< 2010-09-09 15:28:52.234  SIZE
< 2010-09-09 15:28:52.234  MDTM
< 2010-09-09 15:28:52.234  REST STREAM
< 2010-09-09 15:28:52.234 211 END
> 2010-09-09 15:28:52.234 OPTS UTF8 ON
< 2010-09-09 15:28:52.265 200 OPTS UTF8 command successful - UTF8 encoding now ON.
> 2010-09-09 15:28:52.265 PBSZ 0
< 2010-09-09 15:28:52.281 200 PBSZ command successful.
> 2010-09-09 15:28:52.281 PROT P
< 2010-09-09 15:28:52.312 200 PROT command successful.
. 2010-09-09 15:28:52.312 Connected
. 2010-09-09 15:28:52.312 Doing startup conversation with host.
. 2010-09-09 15:28:52.312 Getting current directory name.
. 2010-09-09 15:28:52.312 Retrieving directory listing...
> 2010-09-09 15:28:52.312 PWD
< 2010-09-09 15:28:52.343 257 "/" is current directory.
> 2010-09-09 15:28:52.343 TYPE A
< 2010-09-09 15:28:52.359 200 Type set to A.
> 2010-09-09 15:28:52.359 PORT 10,222,54,3,6,40
< 2010-09-09 15:28:52.390 200 PORT command successful.
> 2010-09-09 15:28:52.390 LIST -a
< 2010-09-09 15:28:52.406 150 Opening ASCII mode data connection.

This fails whether I run it from the GUI or from a previously tested and working scripted version. It looks from the log like there's a problem with a timeout on the directory listing; presumably it works locally because there's less lag.

Any ideas if this is a WinSCP setting (and if so, where) or something on the FTP server side (Windows Web Server 2008 R2)?

  • I ran into this problem when I forgot to open the extra ports that Passive FTP mode requires.

    Basically, you need to open/allow a range of ports through your firewall in addition to port 21, which you have already opened.

    It looks like you are using Microsoft's FTP Server. Microsoft has a support page with instructions here.

    When I did this, I opened port 21 for the control port, and then arbitrarily chose ports 65000-65050 for the Passive FTP Data. Your range will vary based on the number of concurrent users/sessions you need or expect (more concurrent users/sessions require more open ports) and any other ports that are already open for other applications.
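
    On Windows Server 2008 R2, opening the control port and a passive data range in the built-in firewall would look something like this (the 65000-65050 range is just the example from above; use whatever range you configure on the FTP server):

        netsh advfirewall firewall add rule name="FTP control" dir=in action=allow protocol=TCP localport=21
        netsh advfirewall firewall add rule name="FTP passive data" dir=in action=allow protocol=TCP localport=65000-65050

    Since the log above shows the session in active mode (Passive: No) sending a PORT command with a private address, the WinSCP side would presumably also need passive mode enabled for this to apply.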

    From minamhere

Virtualizing OpenSolaris with physical disks

I currently have an OpenSolaris installation with a ~1TB RAID-Z volume made up of 3 500GB hard drives. This is on commodity hardware (an ASUS NVIDIA-based board with an Intel Core 2).

I'm wondering whether anyone knows if XenServer or Oracle VM can be used to install 2009.06 and be given physical access to the three SATA drives, so that I can continue to use the zpool and be able to use the Xen bits for other areas.

I'm thinking of installing the JeOS version of OpenSolaris, have it manage just my ZFS volume and some other stuff for work (4GB), then have a Windows (2GB) and a Linux (1GB) VM (there's 8GB RAM on that box) virtualised for testing things.

Currently I am using VirtualBox installed on OpenSolaris for the Windows and Linux testing but wondered if the above was a better alternative.

Essentially,

3 Disks -> OpenSolaris Guest VM, it loads the zpool and offers it to the other VMs via CIFS.

  • I would suggest you just look at the xVM hypervisor (http://hub.opensolaris.org/bin/view/Community+Group+xen/WebHome). This would allow your existing OpenSolaris install to become the hypervisor for other OSes, but also allow you to use ZFS as the backend disk for the virtual disks of the VMs.

    notpeter : Xen in OpenSolaris is no longer really being maintained, especially as a Dom0. I can't find documentation to back this up, but when I asked the question of an Oracle virtualization dev at an OpenSolaris user's group six months ago, he said outright Zones/LDoms are the focus and Xen had little or no ongoing development resources.
    From Thomas G
  • If your processor supports VT-x and your chipset supports VT-d, you might want to consider VMware ESXi. VMDirectPath (aka IOMMU or VT-d) lets you attach a physical PCIe device (or a PCIe-PCI bridge and all its attached PCI devices) to an individual VM. I use VMDirectPath to attach my LSI SAS card to Nexenta so ZFS gets direct access to the disks. My Windows/Linux VMs access storage from OpenSolaris via CIFS/NFS without a problem, although their boot vmdks are on a VMFS-formatted disk off my motherboard's onboard SATA.

    From notpeter
  • No, you can't give a guest VM shared access to the zpool. What you can do is to share zfs file systems from the dom0 (via CIFS) to your guest VMs.
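
    Sharing a dataset over CIFS from the dom0 is roughly a one-liner per filesystem (the pool and dataset names below are hypothetical, and the CIFS/SMB service must already be enabled on the OpenSolaris host):

        zfs create tank/shared
        zfs set sharesmb=on tank/shared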

    From Martin

Roughly what would it cost to run a few low-traffic blogs on the new Amazon EC2 Micro Instances?

I'm thinking about ways to start a WP multisite with a couple of blogs. It's going to start small, but hopefully grow over time.

First I was thinking about a VPS, but now I read about their cheap cloud computing.

I read they cost $0.02/h of CPU time. Is that calculated from CPU usage, or just for every hour my server is on?

Do I need additional things to run Wordpress? Like storage and database stuff.

  • You could theoretically run WordPress on this. You pay for every hour the server is on. You would have to log in and install WordPress from scratch. You are getting a freshly installed system, so you have to install the database, Apache, copy WordPress in there, etc. You won't get any sort of control panel like you would get with a VPS.
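
    On an Ubuntu-based instance, the "from scratch" setup is roughly along these lines (package names assume Ubuntu 10.04-era repositories):

        sudo apt-get update
        sudo apt-get install apache2 mysql-server php5 php5-mysql libapache2-mod-php5
        cd /var/www
        sudo wget http://wordpress.org/latest.tar.gz
        sudo tar xzf latest.tar.gz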

    Amala : Yes, reading your question again, if you just need WordPress, go for shared hosting. If you really need a VPS, I would go with some cheaper VPS hosting. Amazon EC2 is not stable enough.
    From Amala
  • If you are new to hosting/WordPress you might want to look at a good shared hosting company such as SurpassHosting or something along those lines. You can install as many WordPress blogs as you want with the click of a button.

    If you are comfortable installing a web server, a database server and WordPress, then yes, Amazon would work just fine.

    With Amazon you pay per hour + bandwidth + storage, if you get additional storage (S3).

    $15 per month, give or take, for the CPU usage + bandwidth.

    From Luma

Is there a way to throttle syslog

Hi

We use Log4J with its SyslogAppender to send messages to a central syslog-ng server, all running on Unix machines.

Is there a way (whether that's in Java or in Unix) to throttle the number of messages that are sent in order to avoid an upset server also upsetting the network?

The only option I can think of is to set the log level higher so fewer messages will actually be sent, but that isn't ideal, as important messages could be suppressed on a machine that is otherwise behaving itself.

I suppose in an ideal world a dynamically-changing level would be good: if the number of messages/second passes a certain level, the threshold rises, but at the same time that sounds a bit like overkill.

Any ideas?

Thanks

Rich

  • The only thing I know of that's close to that is a configuration to silence messages that are the same: if a log message repeats, say, 200 times, you can ask syslog to log it once, ignore the others, and only log a message saying that the message was repeated 199 more times. Throttling the log can make you lose log messages, and that's not desirable.

    Maybe you can put QoS/traffic shaping between your servers and the log server and use that to control the speed?
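
    A rough sketch of what that shaping could look like with tc on a sending host (the interface name, rates, and the assumption that syslog traffic goes to port 514 are all illustrative):

        # everything defaults to class 1:10; syslog traffic is capped at 256kbit in class 1:20
        tc qdisc add dev eth0 root handle 1: htb default 10
        tc class add dev eth0 parent 1: classid 1:10 htb rate 100mbit
        tc class add dev eth0 parent 1: classid 1:20 htb rate 256kbit ceil 256kbit
        tc filter add dev eth0 parent 1: protocol ip prio 1 u32 match ip dport 514 0xffff flowid 1:20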

    From coredump
  • I think the question is, do you care if you don't get all your logs on the central server? What you're talking about is essentially dropping messages--in which case, you'll lose logs. Is this okay? If it is, you've already answered your own question--raise the debugging level to only get messages you really care about.

    If, however, you're trying to meet, say, a bandwidth constraint (such as Splunk's monthly processing limit), you'll need to write an intermediary server to take the logs from syslog and prioritize them. It's not difficult, but it is highly specific to your use case. One bonus with this method is that this middleman can immediately send important logs to the aggregation server and, at the end of the day/month, send the next-highest-priority logs that weren't sent originally. That way, you can fill the quota exactly.

    If you add more specific requirements (such as why you need to do this), and what you mean by limiting logs (duplicate lines? bandwidth? space? aggregation server can't keep up? etc..) then you'll get a much better answer.

    Good luck!

    Rich : Thanks for that - it's mainly a concern about network bandwidth.
    Redmumba : In that case, you may also want to investigate using compression, and sending your logs in chunks (just so the act of compressing it will give you more bang for your buck). :) And no problem, I'm happy to have helped!
    From Redmumba
  • If you are using log4j correctly, using a higher log level should not affect the important messages, as they should have a high level. Log the detail locally, with local rotation.

    From BillThor
  • If it's a bandwidth problem, as you said, you could even use compression (it's supported in rsyslog, but I'm not sure about syslog-ng). Also, if you are not using logs for real-time alerts, you could make a script that reads the local logs every few seconds and sends a compressed aggregation of them.

    If you want a distributed solution then you can use the suggestion above (it saves bandwidth on many links). But if you prefer something more centralised (with slightly more bandwidth waste, because you have to send useless logs to the intermediate server), then Redmumba's solution is much better.

When do I need multiple Database Availability Groups for Exchange 2010?

What is the deciding factor when we implement multiple Exchange 2010 Database Availability Groups (DAGs) versus a single DAG?

I'm using HP's Exchange 2010 sizing tool to put a budget together and it's asking me about the quantity of DAGs. We're going to hire consultants to design this, but I just need a rough estimate for planning / budgeting purposes.

What is the relation between DAG count and server count?

  • I haven't voted to close this question yet, but I do believe it's borderline, since it can't really be "answered" in its current form.

    High availability is subjective and wholly dependent on your circumstances and what you deem is an acceptable level of risk.

    You haven't indicated anything about your acceptable level of risk. For example, if you want the smallest form of HA, without site resilience, then you'll need two DAGs. If your WAN is a single point of failure, then you'll want two four-member DAGs in two different datacenters.

    This article on Database Availability Group Design is a pretty good start.

    GregD : WTF is with the downvote?
    Chris S : Well I'll level it off at least, but I completely agree; and you put way more effort into this answer that I would have based on the effort put into the question.
    EnduroUSATour : @ChrisS - I'm new here ... I put more effort into the question w/an edit.
    From GregD
  • Performance and Availability requirements, primarily based on the number of users, how heavily they're using the servers, and how your systems can fail. More usage or points of failure = more DAGs.

    EnduroUSATour : @Downvoter - This is the bulleted list I'm looking for. (although I need more info) I'll even it out +1
    From Chris S
  • It depends what you need from the DAG. You can have several types of DAG - a "normal" DAG and a "Lagged Copy DAG", which is used more for disaster recovery.

    How many replicas you should have is (in my opinion) more of a business decision than an IT one.

    A "Normal" DAG is basically a copy of specified mailbox databases. You would have multiple replicas when you want failover to occur transparently to your end users. This allows several Exchange servers do go down (for maintenance or otherwise) and keep your mailbox database online.

    A "Lagged Copy DAG" is still a DAG which replicates your mailbox database, but in a slightly different way. You can set a lag period on a lagged DAG so the replica is effectively a copy of the main database at some point in the past (by default 14 days, IIRC). Once a transaction log file is finished (i.e it reaches 1MB and another is created) on your active database copy, it is immediately copied to all lagged replicas but is not replayed immediately. This transaction log will stay on the lagged replica until the lag period expires, at which point it is written to the lagged copy database.

    With that information, you should be able to give management an idea of what Exchange can do with regards to high availability/disaster recovery and possibly recommend a solution, but let them ultimately decide.

    From Ben

Help explaining a performance benchmark for VMware

The page below lists the results of benchmarking

http://www.vmware.com/products/vmmark/results.html

The page here

http://www.vmware.com/files/pdf/vmmark/VMmark-Dell-2009-04-21-R710.pdf has column headings "actual" and "ratio".

What do they mean?

  • The ratio is actually the consolidation ratio, and the actual score is the throughput of operations for the tiles. There are a few good links for a full explanation of the VMmark process, but it's not for the faint of heart:

    Since it sounds like you're trying to gauge the performance of your own or future server, I highly recommend the second link. The VMMark FAQ also has a ton of info on how to interpret scores.

    Best of luck!

    From Redmumba

Which OS should I choose for my VPS?

I have to set up a VPS server for the first time, for some Drupal-based websites, and I have many options for the operating system.

In particular, I can choose between Ubuntu 32bit, Ubuntu 64bit, CentOS, and Debian.

I was wondering which one to choose considering:

  • I have only 256MB RAM, so I probably should choose a minimal OS such as CentOS
  • Should I choose 32-bit or 64-bit? I thought this was a constrained choice depending on the machine. But since they ask me to choose, I'm worried I'm going to have compatibility issues with the installed software.

I want to install:

  • Apache server
  • MySQL
  • Drupal

thanks

  • If you want a minimal system, the best choice is Debian. As for 32/64-bit, all the software you need is available in 64-bit, so 32-bit gives you no advantage.

    wzzrd : Please explain why Debian will do better as a minimal system than the other options.
    David Spillett : A base Debian install simply includes less by default than other distro's default installs, so you have less to chop out when trying to reduce memory use in order to optimise for the constricted environment. The others can be cut down to the same, but it is a little less hassle this way around.
    lg : I agree with David, he has my same opinion.
    From lg
  • You should choose whatever you feel the most comfortable with. They all will do the trick.

    To be honest: you will be able to run apache AND MySQL with 256MB of RAM, but I wouldn't expect too much performance out of that. And you won't be able to run much else, or the server will go into swapping.

    Patrick : The point is: how many shopping carts for relatively small-audience websites can I install using Drupal? Let's say local business websites (considering only the RAM).
    Patrick : Also, isn't Ubuntu much more computationally expensive than CentOS?
    wolfgangsz : No, it isn't. However, with 256MB of RAM you should not even dream of using a GUI. That will surely kill your performance. CLI only. Shopping carts on Drupal: you simply install the module once and then configure individual sites to use it (or not, as the case may be). If you are asking how many shopping carts can be in use in parallel, I don't have an answer for that. With that amount of RAM not too many, I would guess. Maybe 10 or 20. But I really can't say with any certainty.
    Patrick : ok cool, one more thing: I should go for 64-bit, right? Even if I don't have a lot of RAM..
    wolfgangsz : Unless you have plans of moving beyond the 4GB memory limit in the near future, there is probably not much point to it. Having said this, other than a slight increase in program size and memory requirements, there is not much harm in it either and it's somewhat future-proof.
    From wolfgangsz
  • Do you have any experience with any of the distros you list? If so, go with what you know already.

    If, on the other hand, this is your first foray into Linux servers, then Ubuntu or CentOS are probably better options in my opinion. Why? In my experience the documentation available for those distros is more consistently approachable than Debian's. I started out trying to learn Linux using Debian (about 15 years ago), and I went around in circles for a couple of weeks - I needed to understand x in order to understand y in order to understand z in order to understand x.

    Things may have changed but since Ubuntu and CentOS are both backed up by large businesses (CentOS being more or less the same as Red Hat Enterprise Linux) there are clear documentation paths, and books you can buy that take you through step by step. Once you've got either of these, you can delve into Debian with confidence.

    I doubt you'll get any benefit from 64-bit unless you have more than 4GB of RAM on your VPS. I wouldn't pay extra for this.

    The beauty of a VPS running Linux is that once you have it set up, you can upgrade, or migrate your config and data to a new VPS that is more appropriate. Start small and simple and work your way up.

    Patrick : So, doesn't Ubuntu require more resources than CentOS? I'm more oriented towards Ubuntu, but I would like to know if I'm going to save resources with CentOS.
    From dunxd
  • Have a look at Turnkey Linux, a series of ready-made virtual machines based on Ubuntu. They have a Drupal 6 appliance available. They're far lighter than a standard Linux installation, because they leave out many of the components required only when running atop physical hardware.

    From Eric3
  • As lg said, Debian would likely be slightly less hassle to make "minimal" if you start from the base install plus SSH (which is what most VPS providers give you as the starting point), though if you are already familiar with one of the other options I would go with that instead.

    Some code segments and data structures end up taking more RAM in 64-bit code than in 32-bit, which might make a difference, though I expect that difference to be small. You are given the choice because 64-bit CPUs based on the amd64 instruction set (i.e. current AMD and Intel offerings, with the exception of Itanium and related chips, if those are still generally available) can happily run both 32-bit and 64-bit code together with minimal overhead, so in most virtualisation systems a host with 64-bit CPUs can happily run VMs with a mix of 32- and 64-bit OSs at the same time.

    With only 256MB of RAM you will probably need to tweak the Apache and MySQL config for efficient operation, though there are plenty of decent guides for that out there. Tuning MySQL (and Apache) will make much more difference than which Linux variant you go for.
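
    For a feel of what that tweaking involves, the kind of settings people typically shrink on a 256MB box are along these lines (the numbers are illustrative starting points, not recommendations, and file locations vary by distribution):

        # Apache prefork MPM (httpd.conf / apache2.conf): keep the process count small
        <IfModule mpm_prefork_module>
            StartServers          2
            MinSpareServers       2
            MaxSpareServers       4
            MaxClients           10
            MaxRequestsPerChild 500
        </IfModule>

        # MySQL (my.cnf, [mysqld] section): shrink the buffers
        key_buffer       = 8M
        max_connections  = 20
        query_cache_size = 4M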

    To reduce your compatibility concerns, just stick to the packages provided by your chosen OS; this will also save you time and hassle when security updates are released.

    If you are not particularly familiar with any of those Linux variants, I strongly suggest that you "play" with a local server first before trying to set up a public server. Install VirtualBox (or VMware or similar) and create a few 256MB VMs, one for each OS you want to try, and give them all a look. A local server in a VM like this will mean you can play to learn the ropes without worrying about being charged admin fees to rebuild the VPS if you break something significant, and such a VM will also provide you with a useful testing environment for when you are planning changes to your public server and the services it runs further down the line.

EC2 Ubuntu Lucid Hostname/ServerName

I'm trying to move my server from linode to EC2 and am following the guide located here - http://library.linode.com/getting-started/

One problem I have is when I set my hostname to my public DNS:

echo "ec2-46-51-***-**" > /etc/hostname

And add my elastic IP and public DNS to /etc/hosts:

46.51.***.*** c2-46-51-131-72.eu-west-1.compute.amazonaws.com c2-46-51-131-72

I can no longer log on. Should I be using the internal IP and private DNS names?

Thanks

  • The internal and private IPs/DNS names are only for use from within the Amazon network (i.e., from other EC2 servers). An elastic IP only allows you to bind a static IP to a chosen VM. You should probably be doing one of two things:

    1. Connect to the public IP that's generated whenever you launch a new instance, or
    2. Bind the elastic IP and use that.

    Neither should require you to modify the network setup on the VM.
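
    In other words, connecting with the public DNS name that EC2 assigns (or the elastic IP) should just work; the hostname and key file below are hypothetical:

        ssh -i ~/my-keypair.pem ubuntu@ec2-xx-xx-xx-xx.eu-west-1.compute.amazonaws.com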

    Good luck!

    bradleyg : Thanks. So for a web server I don't need to change my hosts file at all? Or change my hostname?
    Redmumba : You shouldn't, no, but try it without touching the system files first to confirm. When tweaking configs, you should always start with the defaults, and then work from there.
    From Redmumba

Switching between Office 2003 and 2007 applications requires reconfiguring (installation dialog)

Hi

In our class environment, we have one class used primarily for Microsoft Access courses where each time you switch between 2003 and 2007 it will start reconfiguring with the installation dialog I'm sure you've seen before. It recently started happening in a second class and we are completely oblivious as to what triggered it. Of course external teachers use the classroom so it's hard to track any changes that might have happened. It happens for Access and Word, not for Excel or Powerpoint.

Both have the most recent updates (SP3 and SP2 for 2003 and 2007 respectively).

The users have no admin rights, but when we log in with local admin the same thing happens and keeps happening even after this "reconfiguration".

Another issue in these classrooms was posted here: http://serverfault.com/questions/179520/opening-my-documents-prompts-for-credentials-in-redirected-environment I do not think the two are related, but you might disagree. Feel free to let me know.

How reliable is Exchange 2010 when using JBOD with a DAG? Is anyone using a JBOD/DAG?

There are many ways to lay out your E10 disks... is anyone using JBOD for Exchange 2010 within a DAG?

Is the failure of a node completely transparent to the end user?
Are transactions ACID-compliant? In other words, will an in-flight transaction be repeated/resumed on the failover node?

EDIT:
I'm aware RAID could be used within a JBOD, but some people here may not know that Microsoft has a proposed RAID-less JBOD architecture with Exchange 2010 for the mailbox role. In the event an array or node fails, the CAS server will fail over to a different server hosting a replica of the JBOD data.

I'm only interested in answers that take into account JBOD with the new DAG concept, does it work in the real world, and is anyone doing it.

  • For which part of Exchange 2010? If you put your main database on a JBOD drive, and any one of the underlying real drives fails, the database is gone (i.e. you will need to recover it from your backups), since there is no redundancy in JBOD. For the main database you really need a RAID level with redundancy (i.e. RAID1 or higher).

    However, a JBOD device can be useful for temp files.

    EnduroUSATour : @wolfgansz - Microsoft is recommending JBOD's within a DAG for Exchange 2010.. I'll edit my question so that the DAG usage is emphasized
    wolfgangsz : Ah, that DOES make a difference. When using a DAG, that effectively introduces the redundancy which otherwise is provided by the RAID array. Using a JBOD in a DAG will probably work. To answer your question as such: we use RAID10 arrays in our DAGs.
    From wolfgangsz
  • ONLY ever consider using JBOD if you are going to have multiple Exchange Servers. As has already been pointed out, if one of those disks dies, so does your Exchange server.

    You should set up DAGs on your mailbox servers to host multiple copies of your databases, so if one mailbox server goes down your mailbox databases are not lost and another mailbox server will take over as the active copy.

    You'll also need multiple client access servers to ensure your users can still get their mail. If you do this, be sure to set up a CAS Array so if a client access server goes down, your users are automatically redirected to another Client Access server.

    Multiple hub servers will also be required, but there shouldn't be much configuration involved in setting those up, since Exchange should just find one.

    Provided that you have DAGs and CAS Arrays set up, it is all transparent to end users. They might get brief "Connection to Microsoft Exchange server has been lost" and "Connection to Microsoft Exchange server has been restored" messages, but that should all be over very quickly.

    With regards to "is Exchange ACID compliant", the answer is yes. Exchange uses a write-ahead transaction log, so transactions are guaranteed. If Exchange goes down in the middle of a transaction, it will attempt recovery and replay the transaction when it is started back up. If this fails, the transaction is discarded.

    EnduroUSATour : @Ben I'd +1 your answer but I don't have enough rep to do so.
    From Ben

How to use IIS 6.0 redirects?

I'm attempting to use the below reference to create a re-direct for my local site with no luck.

http://www.microsoft.com/technet/prodtechnol/WindowsServer2003/Library/IIS/6b855a7a-0884-4508-ba95-079f38c77017.mspx?mfr=true

I want absolute links on my local site (which currently point to the online site) to point to my local site instead.

example absolute link [http://online.com/products]

when I click the local version I'd like it to redirect to: [http://offline/products]

I want to preserve everything after the domain name and append it to the server (local) name so that when I click a link it will redirect to the local site and not the online version.

I've tried [http://offline$S] but that doesn't append the "suffix" /products the way I thought it should.

What's going on here?

  • An IIS redirect isn't going to work for you in this case. For those absolute links, your offline IIS server is completely out of the picture. If you look in your IIS logs on offline you should see that none of those requests even make it to that server. Your browser will attempt to contact online.com directly and send the request there (by way of any proxy servers in the chain). IIS on offline never gets a chance to handle the request and therefore will not fire any redirect. For an IIS redirect to work, the initial request must be served by your IIS server.

    To make this work you would need to get creative with dns or internal proxy servers.

    payling : I have little to no experience configuring dns or internal proxy servers. Do you think those are my only solutions other than converting links to relative?
    squillman : @payling: Yes, I do. With the added possibility of using host file entries, but that's a similar issue to creating a dns solution. The issue that you're facing is a name resolution thing which are solved by dns / hosts file entry, or by using a proxy server that knows how to forward the HTTP requests appropriately for your network environment.
    payling : I'm not sure if this helps, but the reason why I'm looking to redirect is because I created my site using SSI. The menus, footers, headers etc are all includes and must have absolute links otherwise the links will redirect to the wrong location once the user has traveled to a page in a sub directory.
    squillman : @payling no, not really. An absolute link in the markup that a client uses is just that regardless of how it's generated. The client does not interpret them differently (and has no way of distinguishing differences).
    payling : I'm currently considering an option to use SSI conditional statements to determine which URL to load. The live site will always use port 80 and the local site is on 81. I could create an SSI statement that says: if port 81, load the local URL, else load the absolute URL.
    payling : Just realized that won't work, iis6 does not support SSI conditional statements...
    From squillman
  • Is the computer hosting the 'local' site on the same network as the IIS?

    Do you need the option to access both online and offline versions?

    payling : Yes and yes. The main site (online) is hosted off site. I setup iis6 on our internal server for development purposes so I could test it before uploading it to the live site. So I can't redirect all urls for our company to go to the local website, it's meant for just the web developers here.
    Kyle Buser : You can accomplish this in a makeshift manner by assigning an IP address to the domain name in the hosts file of the computers that want to point to the offline version. On Windows machines this file is generally located at C:\Windows\System32\drivers\etc\hosts. If you add 127.0.0.1 onlineurl.com (replace 127.0.0.1 with whatever the internal site IP address is) you can do it this way. On Linux it's /etc/hosts.
    From Kyle Buser
  • Is this just for your developers? I'm still a bit confused on what you're attempting to do. However you can do a few things, some already suggested.

    1. Use Host Files if you can't change DNS which will change resolution for entire company
    2. But if your developers are the only ones going to offline.com - create a new local/internal DNS record (host record) for offline.com
    3. Use your redirects on offline.com for the pages where you want the links to go to live content on online.com (basically reverse of what you were thinking - since you do have control of offline.com)

    Also, for #3, instead of using IIS redirects, try ISAPI rewrite tools. Since it's dev, I'm sure you don't want to pay for one (the one I use is Helicon - isapirewrite.com [won't let me post more hyperlinks] - $99/server), but there are free ones - IIRF (http://iirf.codeplex.com/) might be a good choice.

    payling : I'll have to look into ISAPI rewrite tools, sounds promising!
    From nurealm