Wednesday, January 26, 2011

How to get permission to create a full-text index?

I tried to create a full-text index on my new full-text catalog and got this error:

Msg 9967, Level 16, State 1, Line 1
A default full-text catalog does not exist in database 'foo' or user does not have permission to perform this action.

FYI--

  • I connected to the target SQL Server with Windows Authentication
  • Full-text indexing appears to be installed (right-clicking the table, I see the Full-text Index -> option)
  • I verified that my full-text catalog was created
  • This is my first time setting up a full-text catalog and index

What do I need to do in SQL Server 2005 and/or in Windows Server 2003 to get permissions?

Please be thorough (assume I am a sysadmin n00b). Thank you.

  • If you have the database-owner permission, that's all you need to do there.

    Full-text indexing is an option in the SQL Server installer. You may need to go back and add the feature.

    Other than that, these instructions should take care of you:

    http://sqlserverpedia.com/wiki/FTS_-_How_to_use_TSQL_to_Create_Full-Text_Indexes

    Bill Paetzke : Thanks @Miles. Your link showed me my fault. The error was with the "default catalog" part--not the permissions. Once I added `ON MyCatalog` to the SQL statement, it worked.
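
    For reference, a minimal sketch of the working statement (the table, column, and key index names here are made up for illustration):

        CREATE FULLTEXT INDEX ON dbo.MyTable (MyTextColumn)
            KEY INDEX PK_MyTable
            ON MyCatalog
            WITH CHANGE_TRACKING AUTO;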

IIS7 Session ID rotating with Classic ASP

I am trying to migrate a Classic ASP app onto a Windows 2008 R2 server.

The application features run fine, but I am having issues with sessions.

The application keeps the logged in user information in session and I am constantly getting knocked out as if the session had expired.

While debugging I have discovered the sessions are not expiring but instead I am getting 2-3 different Session IDs in use by one browser.

I am outputting Response.Write(Session.SessionID) on various pages in the application, and I can sit there, hit refresh over and over, and watch the number change between these 2-3 SessionIDs randomly.

The sessions are still valid: when I refresh and happen to get the Session ID that I logged in under, the page is displayed (because the security check succeeds), and when I get one of the other Session IDs I get the "you aren't logged in, you need to log in" message.

If I close and re-open the browser, it's the same story, just with a new set of IDs.

This happens with IE8, Firefox and Chrome from multiple computers.

Things I've tried:
- AppPool set to No Managed Code and Classic
- Output Caching set .asp to never cache
- ASP Session Properties: enabled and disabled ASP session state and confirmed it affected the page (error trying to read Session.SessionID when disabled)

Things I've tried just in case, but which shouldn't have anything to do with ASP sessions:
- Disabled compression
- Changed ASP.NET Session State properties (InProc, StateServer, SQLServer, Cookies, URI, etc.)

  • Check the web garden settings for the app pool. If it's greater than 1, the site will run in more than one worker process. Session state in Classic ASP depends on a single instance of in-process memory. It's very rare that the web garden needs to be set to anything other than 1.

    A webfarm with round-robin load balancing would have the same issue but your post doesn't suggest that you have that.

    ManiacZX : You are correct sir! As soon as I looked at the setting and saw it was set to 3 I knew you were right and a lot of words I can't type here went through my head. I never touched that so it must be IIS default.
    MaseBase : Scott-- Thanks for this info! Does this mean that Classic ASP can never run in a web garden on IIS 7? I had an issue with this and had the Session State set to use the State Service, but I was getting the behavior of an in-process state storage. In a high traffic site, it seems Classic ASP could benefit from a web garden too.
    Scott Forsyth - MVP : MaseBase, if you are using session state (not everyone does), then a web garden with Classic ASP will cause the issues described. You mentioned the State Service, but that's only for ASP.NET, not Classic ASP. The Session State Service isn't affected by web gardens, so as long as you are using it with ASP.NET, a web garden is fine. It's also worth noting that there is almost never a reason to use a web garden. The only situation I know of where it may help is with long-running pages that aren't CPU-bound, where you want extra threads running in parallel.
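
    To check or change the setting from the command line on IIS7, something like this should work (the app pool name is just an example):

        REM show how many worker processes the pool is allowed to spawn
        %windir%\system32\inetsrv\appcmd.exe list apppool "MyClassicAspPool" /text:processModel.maxProcesses

        REM set it back to a single worker process so in-process session state is shared
        %windir%\system32\inetsrv\appcmd.exe set apppool "MyClassicAspPool" /processModel.maxProcesses:1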

Reverse DNS Lookup from the Command Line

I'd like to get a list of all domains pointed to a certain IP address. Is there a way to get this information from the command line?

Not something like "host", "nslookup" or "dig -x"; those return the hostname of the IP address which, while helpful, is only part of what I want returned.

Edit for more information: An example of a website that returns this information is http://www.domaintools.com/reverse-ip/?hostname=74.125.47.104

  • There isn't any way to get this information at all, because there isn't a centralized authoritative repository for this information. Anyone that owns a domain name can create an A record or CNAME that points to a given IP address. The owner of the DNS records doesn't necessarily have to have any control over the IP addresses.

    For example, Microsoft could create a series of A records named google.microsoft.com that pointed at the public IP addresses for google.com. Other than already knowing it exists, there's no easy way to take Google's IP addresses and find out that google.microsoft.com exists.

    nowthatsamatt : I've edited the question to show an example of what I'm looking for: http://www.domaintools.com/reverse-ip/?hostname=74.125.47.104
    afrazier : My answer still stands. domaintools.com must be building their own database of this information via some other method.
    nowthatsamatt : That makes sense. I guess I could just write a script to google for an ip address and take the results and verify them by pinging the domain for an incomplete list. Looks like that's all I can do, yes?
    afrazier : That or see if DomainTools has an API for querying their database. FWIW, it looks like DomainTools is doing it by building a DB based on A records of the domains themselves, so it's pretty incomplete. For an IP address that I control, it said there were 12 domains. It's correct in that there are 12 domains with A records for the domain name pointing to the IP, but incorrect in that it missed subdomains and domains forwarded by GoDaddy.
    Jim B : afrazier is correct- the tool you are pointing to is relatively worthless for the reasons specified. You also have to remember that there is nothing stopping me from registering mysuperspecialsearch.net and pointing it to google and the only way you will know what I am pointing to is when you query my DNS records.
    From afrazier
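
    Following the idea from the comments, a rough sketch of verifying a candidate list of domains against an IP (the file name and target IP are placeholders):

        #!/bin/sh
        # print the candidate domains whose A record actually resolves to the target IP
        TARGET=74.125.47.104
        while read domain; do
            if dig +short A "$domain" | grep -qx "$TARGET"; then
                echo "$domain -> $TARGET"
            fi
        done < candidates.txt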

Smart Auto-completion in SVN (and other programs!)

When I type "svn add path/to/somefile..." and tab to autocomplete, the system should only complete files/directories that are not currently under SVN control. Likewise, when I commit, remove or resolve files, the tab completion should only complete files that are relevant to what I'm doing (i.e., modified, currently in SVN or conflicted). This is especially important in SVN, where every single file name you type could potentially benefit from smart autocompletion, but it also applies to other programs.

I know bash has a bash_completion file that can be used to programmatically alter this behaviour, but I've not found a decent example of SVN completion which actually completes file names rather than SVN command names.

My question is: Does anyone have such a setup? Does anyone use a different shell or tool that does something similar? Has anyone given this any thought?

  • Take a look at the completion script found here. It may approach doing what you want.

    An excerpt looks promising:

        # 'files' is set according to the current subcommand
        case $cmd in
            st*) # status completion must include all files
            files=$cur*
            ;;
            ci|commit|revert|di*) # anything edited
            files=$($status $cs| _svn_grcut '@([MADR!]*| M*|_M*)')
            ;;
            add) # unknown files
            files=$($status $cs| _svn_grcut '\?*')
            ;;
    
    Jimmy : This seems to do it! Note that you must specifically enable file autocompletion in this script - search for the word 'svnstatus' for the relevant setting. I just can't believe people are insane enough not to demand a feature like this from day one. How on earth do people use svn without it?
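
    To try it out, you generally either source the script in your own shell or drop it where bash_completion picks it up automatically (the paths below are assumptions):

        # load it just for the current shell
        source ~/scripts/svn_completion.sh

        # or install it system-wide so every bash session gets it
        sudo cp ~/scripts/svn_completion.sh /etc/bash_completion.d/subversion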

SharePoint 2010 Licensing Costs

We will be implementing a public-facing website in SharePoint 2010 and I have a few questions regarding licensing:

  1. Is there any (relatively) reliable pricing information available for SharePoint 2010? What about rumors?
  2. What edition of SharePoint 2010 would be appropriate for a publicly facing website (in 2007, you needed Enterprise for this, but it seems that WCM functionality is included in Standard in 2010)?
  3. What would be a reasonable number to budget for SharePoint 2010 licensing for a publicly facing website?

Note: I have tried asking Microsoft directly. Unless you are a volume license customer, they direct you to a reseller (like CDW). Unfortunately, none of the resellers have the pricing for 2010 yet. The SKU isn't even in their system.

I was able to get in touch with the Microsoft Pre-Sales team and they confirmed that the price list for 2010 will be published on May 3rd (or thereabouts), but they weren't able to give me a price.

Thanks in advance for your help!

  • You could always buy SharePoint 2007 w/ SA and not worry about 2010 pricing.

    Franklin : Yes, this would be one option. The problem is that I've "heard" that the internet option in 2010 will be about half the cost of the 2007 equivalent. Until we know something conclusive though, using the 2007 pricing is probably best for budgeting purposes...
    From Jim B
  • http://www.microsoftvolumelicensing.com/ProductPage.aspx?pid=320

    Check that out... if you understand your points, it may make sense.

    From Brian
  • Government contracts are a great way to sanity check this type of information, if you understand how the product is sold.

    According to the New York State Microsoft Select contract (see the "price list" link):

    • SharePoint for Internet Sites Standard is $9,257 list and $7,389 for the state
    • SharePoint for Internet Sites Enterprise is $32,490 list, and $25,936 for the state

    Note that state governments get MS Select Level D pricing as a ceiling price.

    You can find these items on page 10 of the price list.

Barebones network appliance, 4+ GbE NICs, Intel chipset

Looking for a stepped-up ALIX or Soekris embedded network appliance to load pfSense and/or handle other FOSS-based network roles.

Main criteria are 4+ GbE NICs (it will be used for core routing/firewalling with managed GbE switches), DDR3 RAM support, and a multi-core/Intel Atom processor, in a 1U rack-mountable case or smaller.

Axiomtek has the ideal product but I don't think they have retail channels.

  • You can build your own around this Supermicro chassis. But you would need to use the 4 port Intel NIC to accomplish your goal.

    gravyface : I think this is the best route: build my own. While I've uncovered a pile of pre-built options, there's always a component that's lacking or that I don't need/want.

BackupExec Hyper-V availability (within BackupExec 2010 or separate?)

Hi,

I have downloaded the trial of Symantec BackupExec 2010 but I am a little confused: the agent for Hyper-V is available and for sale ($1800 or something), but the trial of BackupExec tells me I can install the agents I need from the installer.

Can I install a full version of BackupExec, providing I have the license, but also install a full version of the Hyper-V agent? Or do I have to download/buy this separately? Is a trial version available for the agents?

As is probably obvious from this thread, I am a bit confused about the business model of this product. Please clarify.

Thanks

  • During the trial period you can install and use all of the options and agents. Once the trial period ends then any components you haven't added a license for will cease to function.

    Chris Thorpe : Yep and to clarify - When it comes to the actual purchase you need to buy the Backup Exec product itself PLUS the separate Hyper-V Agents for your hosts. You then add the license numbers into the backup exec console, and the corresponding number of agent installations are enabled.
    From hmallett

Which server would you purchase? IBM x3550 or Dell R610?

I'm in the market for a single unit rack mounted server with a strong upgrade pathway.

The two servers on the top of my wish list are:

IBM x3550 M2 Express

Followed by

Dell R610

Ultimately I want to have a dual quad-core Xeon (2 GHz+) server with loads of RAM for a top-notch DB server. The database is likely to keep growing indefinitely, so a snappy RAID 5 array of hard drives will be essential.

Which would you purchase?

  • Whatever matches the servers you already have in order to make support simpler?

    All our servers are from Dell or Apple so I'd get the R610... all our support team are familiar with how Dell servers work and, as I say above, minimising the number of different vendors' kit in your server rooms makes support easier.

    From a quick glance, the IBM server supports more RAM, if that is likely to ever become a factor.

    Harry : This is my first server
    Harry : Yeah, I'm liking the 16 slots (versus 12). If only 8GB sticks were cheaper! Waiting to hear back from IBM for a quote. If I'm reading their information correctly the base spec is much cheaper than Dell
    dyasny : If the spec is base then go for IBM. But if the server is going to grow, you need to count more than just the base, you'll have to take future upgrades into consideration. If the server is going to be remote, you'll definitely require at least an IMM/DRAC card
    Robert Moir : @ dyasny, I think all new Dell servers include the DRAC as part of the base spec. If it's an extra on the IBM then this is something to consider. @ Harry - the extra slots are only important if they will be used, of course. I've seen a lot of places buy servers based on "internet expansion potential" and never ever open the case door until the server is scrapped.
    Harry : I'm starting to downsize my requirements. 8 DIMM slots may be more than enough elbow room
  • If this server is going to be hosted, you need to check whether the server you're looking to buy comes with rails that will fit the DC's racks. Also check the total cost, including support pricing: Dells come with a 4hr warranty - something not to be neglected in mission-critical environments - and IBM might have something similar as well. If you are well familiar with either brand and its specifics (like Dell OMSA and ITAssistant, for instance), then I'd suggest you take it, if everything else is not a determining factor.

    From dyasny
  • "Snappy R5"...hmmm, planning on doing a fair amount of writes? if so why not do the right thing and go with R10.

    Also consider;

    IBM x3550 M3 - quad-core 55xx-series Xeons, 6 disk slots, 16 memory slots (which is an odd number for a QPI-equipped box by the way)

    Dell R610 - quad-core 55xx-series Xeons, 6 disk slots, 12 memory slots

    or

    HP DL360 - six-core 56xx-series Xeons, 8 disk slots, 18 memory slots

    Obviously Dell will be cheaper for most but I'd rather you have the specs to consider.

    Harry : Thanks for the advice re R10. I'm a total noob when it comes to RAID so will be doing a fair bit of research on the type and setup beforehand
    From Chopper3
  • The X3550 M2 is no longer available. You would be looking at the X3550 M3 which has the option of the 5600 xeon and expanded memory capability. Here's a link to the system.

    http://www-03.ibm.com/systems/x/hardware/rack/x3550m3/index.html

    Harry : I'm still waiting to get a response from one of the IBM resellers. IBM called and have passed on my details to one of them.
    From Steven

How do I list currently running shell scripts?

I think I have a shell script (launched by root's crontab) that's stuck in a loop. How do I list running scripts and how can I kill them?

I'm running Ubuntu 9.04, but I imagine it's similar for all *nix systems...

  • ps -ef will show you a list of the currently running processes. The last field is the process name and its parameters. Find the process you are looking for, and look at the 2nd column; the 2nd column is the process ID, or PID.

    Then do kill -9 <pid> to kill that particular process.

    Ignacio Vazquez-Abrams : `kill -9`? You're new, aren't you.
    solefald : heh... i am more old school than new...
    Nick : So what's the 9 for? It seems to work without it....
    Ignacio Vazquez-Abrams : @Nick: Normally kill sends a `SIGTERM` to the process, allowing it to shut down appropriately. Adding `-9` sends a `SIGKILL` instead, causing it to shut down forcibly without any chance of cleanup. See the `signal(7)` man page for some more details.
    solefald : It's the equivalent of "force", meaning nothing can block the kill command.
    Ignacio Vazquez-Abrams : Except being in the middle of kernel code. Not even `SIGKILL` can interrupt that.
    Nick : That's what the power buttons for... ;)
    Dennis Williamson : Some references: [When should I use kill -9?](http://aplawrence.com/SCOFAQ/FAQ_scotec6killminus9.html), [kill -9](http://speculation.org/garrick/kill-9.html), [Useless use of kill -9](http://sial.org/howto/shell/kill-9/)
    From solefald
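
    Pulling the answer and the comments together, a typical sequence looks roughly like this (the script name and PID are placeholders):

        # find the stuck script's PID (the bracket trick keeps grep out of its own results)
        ps -ef | grep '[m]y-stuck-script.sh'

        # ask it to exit cleanly first (default SIGTERM)
        kill 12345

        # only if it ignores SIGTERM, force it (SIGKILL, no chance of cleanup)
        kill -9 12345
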
  • ps auxfwww will give you an ASCII art tree diagram of all the processes running on the system. From there it's just a matter of tracing down from the cron daemon and running kill against the appropriate PID.

    Nick : Thanks- this way was easier in this case since I could follow the tree down from the cron daemon.
  • If you want a more stripped down version with better ASCII art (in my opinion I suppose) you can do

    pstree -p
    
    Nick : Cool, Thank you!
  • Or just good old top command, which will show a toplist of most resource-hungry processes.

    From Johan

Creating a Sphinx table in MySQL crashes MySQL - why?

I've got the latest version of sphinx installed. I have created the index with no problems and searchd starts up with no problems.

However, whenever I try to create a test table (straight from the docs I might add) mysql crashes.

I'm at wits end here.

Any ideas are appreciated.

G-Man

Here's the query:

CREATE TABLE t1 (
    id INTEGER UNSIGNED NOT NULL,
    weight INTEGER NOT NULL,
    query VARCHAR(3072) NOT NULL,
    group_id INTEGER,
    INDEX(query)
) ENGINE=SPHINX CONNECTION="sphinx://localhost:9312/test";

  • I'm finding this rather confusing, and a look over the Sphinx web site doesn't make things much clearer, but as I understand it Sphinx adds functionality to MySQL databases, supporting MyISAM and InnoDB. The creation query tells MySQL to create a table using the SPHINX engine, which is not something MySQL understands out of the box. I therefore suspect that the query is either incorrect or incomplete. Alternatively, some component of Sphinx that should allow MySQL to recognise the new SPHINX engine isn't working as it should.
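
    One quick sanity check along those lines (a suggestion, not from the original thread): ask MySQL whether it actually knows about the SphinxSE engine before creating the table.

        -- SPHINX should appear in this list with Support = YES
        SHOW ENGINES;

        -- on MySQL 5.1+, SphinxSE is normally loaded as a plugin first (library name per the Sphinx docs)
        INSTALL PLUGIN sphinx SONAME 'ha_sphinx.so';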

AD logon script, how to...

I'm a student, and I have an assignment where I need to prevent users from changing the background on a client computer. The thing is, I've been looking around to find out what language the logon script uses, or any site with handy information. I tried googling, but I really can't find anything useful; I don't know if I'm googling the right terms.

All I've found for now is a lot of tutorials about mapping network drives and so on

  • Login-scripts will use anything the client machine considers a valid script. .BAT and .CMD files are understood by everything, but are significantly limited in what they can do; they can do simple drive mappings and a few other operations but little else. Almost everything also can run .VBS scripts which allows a much more robust script. If you're lucky enough to have a pure Win7 environment, it is very possible to use PowerShell scripts. And finally, if you're really gung-ho about it, you can actually compile your own .EXE files that will do everything you need to do and have it be your login script. The thing to keep in mind is that the login script is, I believe, executed in the User's context so it can only do what the user is allowed to do.

    Think of a login script as a file that the Group Policy engine gives to the local machine to run after a start command.

    start login.vbs
    start login.bat
    start login.ps1
    start login.exe
    

    That's not exactly how it works, but it does frame the concept better.

    Also, the machine itself can have startup-scripts! These run before user login, and run in SYSTEM context. Can be handy for certain tasks.
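
    For illustration only, here is a minimal .vbs logon script of the drive-mapping kind the question mentions (the server and share names are made up); the wallpaper restriction itself is better done with Group Policy, as the other answers explain:

        ' runs in the user's context at logon
        Set objNetwork = CreateObject("WScript.Network")
        objNetwork.MapNetworkDrive "H:", "\\fileserver\home\" & objNetwork.UserName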

  • A logon script is not the appropriate place for what you are trying to accomplish. Group Policy is what you want to do as previously noted. Alternatively you could set a mandatory profile for the user(s).

    From sinping
  • You don't want to use a logon script for this. You want to use group policy. You'll need to read up on Group Policy, there's plenty of resources to learn with (like GPOGuy.com), but here's the specifics on how to absolutely prevent the wallpaper from being changed.

    Microsoft KB Article: You can change the desktop wallpaper setting after administrator selects "Prevent Changing Wallpaper" option in Group Policy (327998)

  • Here are the exact instructions modified for a domain from @K. Brian Kelley's link

    Open up the group policy object editor and point it to your domain group policy

    Under the Domains Policy, expand User Configuration, expand Administrative Templates, expand Desktop, and then click Active Desktop.

    Double-click Active Desktop Wallpaper.

    On the Setting tab, click Enabled, type the path to the desktop wallpaper that you want to use, and then click OK.

    Hope that helps.

    K. Brian Kelley : Right, except you might not want it to be your default domain policy. That's why I gave the other link to GPOGuy.com. You may only want to customize this for particular users.
    From Campo

back-end SQL server 2005 databases for website

Hi,

We're migrating an existing IIS website + MS SQL 2005 database (on the same server) to a new test set-up. The existing set-up is too slow.

I want one IIS server and two MS SQL Server 2005 servers: one live DB server for the website queries (inserts, updates) and another for backups, reports and stored procedures. So the live DB should be more aimed at performance; the other doesn't even need to be synced instantly. What is the best way in SQL Server 2005 to set this up? Can somebody point me in the right direction and give me some pointers?

Thanks

  • There are several options that come immediately to mind.

    • Snapshot replication
    • Transactional replication
    • SSIS job to ETL data
    • T-SQL through a linked server connection

    How much total data? How much data is changing and how often? How soon does the changed data need to appear on the reporting system? Those are some questions to ask to determine what option is best.

    Datapimp23 : Let's say we want the data from the live db transferred to the idle one every hour. The data itself in total is around 2 GB but the transaction log is around 25 GB.
    K. Brian Kelley : Size of the transaction log could be large for a lot of reasons. See Paul Randal's blog posts on database recovery at sqlskills.com for more information.
    BradC : the trans log shouldn't be that big compared to your data. What recovery mode are you in? (Full or Simple) Are you doing regular transaction log backups?
  • One live DB server for the website queries (inserts, updates) and another for backups, reports or stored procedures

    Ok, you'll definitely need to do regular backups on both databases. And your web database could potentially use stored procs for inserts/updates, too (depending on how you've designed your app).

    Log shipping would probably be the easiest to set up and maintain (take transaction log backups on the primary DB, then restore them to the "reporting" db)

    djangofan : I agree, but there is a learning curve on that if you haven't done it before.
    BradC : sure, but I would argue that log shipping is FAR easier to learn than snapshot or transactional replication. Also far simpler than any custom logic to copy data.
    SqlACID : +1 couldn't agree more.
    From BradC
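
    At its core, log shipping is just the two statements below run on a schedule (database and file names are placeholders; in practice the SQL Server log shipping wizard sets the jobs up for you, and the reporting copy must first be seeded from a full backup restored WITH STANDBY):

        -- on the live server
        BACKUP LOG [WebDB] TO DISK = N'\\reportsrv\logship\WebDB_log.trn';

        -- on the reporting server: apply the log, keeping the database readable between loads
        RESTORE LOG [WebDB] FROM DISK = N'\\reportsrv\logship\WebDB_log.trn'
            WITH STANDBY = N'D:\logship\WebDB_undo.dat';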
  • If your front end is a single web server, I would tend to question the benefit of having two database servers.

    That being said, database mirroring is another technology available to you in SQL 2005 SP1 that would solve this. It does require that your database is in the full recovery model, though. You can also use it to get some automated redundancy in the event that your primary fails.

    From JohnW

Why does PSEXEC work if I don't specify a password?

When I run SysInternals PSEXEC to launch a process on a remote machine, if I specify the password in the command line it fails with:

PsExec could not start cmd.exe on web1928:
Logon failure: unknown user name or bad password.

psexec \\web1928 -u remoteexec -p mypassword "cmd.exe"

or

psexec \\web1928 -u web1928\remoteexec -p mypassword "cmd.exe"

If I just specify:

psexec \\web1928 -u remoteexec "cmd.exe"

and type in the password it works just fine.

The originating server is Windows 2003 and the remote server is Windows 2008 SP2. The remoteexec account only exists on the remote server and is a member of the Administrators group.

  • If you don't provide a username, your current authentication is passed through. When passing the -u parameter you may need to specify the username as DOMAIN\username. I am going to guess that psexec is trying to authenticate as the local account 'remoteexec' on the computer instead of a domain account like you expected.

    Kev : Thanks for the reply. I did try specifying `WEB1928\remoteexec` as the user, but sadly no joy.
    Kev : @zoredach - john's comment under the question nailed it. But thanks for the suggestion.
    From Zoredache
  • Could it be that the password contains characters that require the password to be in quotes?
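
    If that's the case, wrapping the password in quotes on the command line should do it (the password below is just a placeholder):

        psexec \\web1928 -u web1928\remoteexec -p "my p@ss word" cmd.exe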

Mounting windows shares with Active Directory permissions

I've managed to get my Ubuntu (server 10.04 beta 2) box to accept logins from users with Active Directory credentials; now I'd like those users to access their permissible Windows shares on a W2003 R2 server.

The Windows share ("\\srv\Users\") has subdirectories named after the domain user accounts, and permissions are set accordingly. I would like to preserve these permissions, but don't know how to go about it.

  • Would I mount as an AD administrator, or have each user mount with their own AD credentials?
  • How do I choose between using mount.smbfs or mount.cifs?
  • One option would be to set up pam_mount. It allows you to mount shares when a user logs into the system. With pam_mount the folder will be mounted with the user's credentials. The credentials do not need to be saved anywhere; they are automatically passed through by PAM from what the user typed at login.

    Jamie : This looks really promising ... thanks.
    Jamie : I'm holding out on checking the answer, only because I've not quite got it working yet.
    Jamie : From another question; on the same Ubuntu system `sudo apt-get install libpam-mount`
    From Zoredache
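
    For reference, the per-user mount is usually a single <volume> line in /etc/security/pam_mount.conf.xml; this sketch assumes the \\srv\Users\<username> layout from the question (the mountpoint is an assumption):

        <volume user="*" fstype="cifs" server="srv" path="Users/%(USER)"
                mountpoint="/home/%(USER)/windows" />

    Using fstype="cifs" here also reflects that mount.smbfs has generally been superseded by mount.cifs on current kernels.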

Why is wp-cron taking up so many resources?

From /var/logs/httpd/error-log:

[Thu Apr 22 01:41:15 2010] [notice] mod_fcgid: call /var/www/vhosts/mydomain.com/httpdocs/wp-cron.php with wrapper /usr/bin/php-cgi  
[Thu Apr 22 01:41:15 2010] [notice] mod_fcgid: server /var/www/vhosts/mydomain.com/httpdocs/wp-cron.php(17999) started  
...The previous line shows up 8661 times...

What's in Cron?

Apr 22, 2010 @ 18:25 (1271960731)    Twice Daily    wp_version_check  
Apr 22, 2010 @ 18:25 (1271960731)    Twice Daily    wp_update_plugins  
Apr 22, 2010 @ 18:25 (1271960731)    Twice Daily    wp_update_themes  
Apr 23, 2010 @ 12:21 (1272025294)    Once Daily wp_scheduled_delete  

Running CentOS 5/Plesk 9.3/PHP as FastCGI/suExec with WP 2.9.2

Thanks in advance.

  • Is the request to wp-cron.php coming from the local host, or somewhere else? If the former, it looks like WordPress's timing is doing something wrong (see spawn_cron() in wp-includes/cron.php); if the latter, disable access to it via .htaccess or similar.

    Gaia : It is coming from the local host. I guess the next step is to find the SQL script to add the commentmeta table and see if the problem goes away. Thanks.
    From Mo
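
    For completeness, the "if the latter" case from the answer would look something like this in the site's .htaccess (Apache 2.2 syntax); it doesn't apply here since the requests are local, but it blocks outside hits on wp-cron.php:

        <Files "wp-cron.php">
            Order allow,deny
            Deny from all
        </Files>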

Is there a way to use individual DSA key-pairs for Apache (WebDAV) Authentication?

I'm basically looking for a way to allow secure but password-less authentication to SVN through WebDAV (I would rather not use svn+ssh). I know this is possible with SSH; is it possible with Apache authentication too?

  • Something like this? The page looks pretty old though...

    Nate Wagar : That looks like it's just HTTPS where the client already has the certificate. I'm looking for something to identify and authenticate each individual user.
    solefald : I don't think that would be possible. SSL cert auth is the only way that I know of.
    Zoredache : @Nate Wagar, by following the above you would issue a unique key and certificate each user. Though I would suggest you use something like TinyCA (http://tinyca.sm-zone.net/) instead of creating all the certs manually.
    Zoredache : See also: http://httpd.apache.org/docs/2.0/ssl/ssl_howto.html#accesscontrol
    From solefald
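
    Building on Zoredache's links, a rough sketch of what per-user client-certificate authentication for the SVN location might look like (the paths, location, and CA file are assumptions; the CA would issue one certificate per user):

        # in the SSL virtual host
        SSLCACertificateFile /etc/apache2/ssl/users-ca.crt

        <Location /svn>
            DAV svn
            SVNPath /var/svn/repo
            SSLRequireSSL
            SSLVerifyClient require
            SSLVerifyDepth 1
            # take the username from the certificate's CN for logging/authorization
            SSLUserName SSL_CLIENT_S_DN_CN
        </Location>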

Best linux distribution for Java build server & ...

Hi all, we are trying to set up a build server for building our Java projects. The following software will be installed:

  • Subversion
  • Jira/Confluence/Crucible/Fisheye ...
  • Bamboo (continuous integration solution)

I have 2 questions:

  1. Which Linux distribution is better suited, in your opinion? Our current candidates are openSUSE, CentOS, Gentoo and Mandriva.
  2. Is it possible to build something like an image after finishing the setup process and burn it onto a hard drive for the next customers, without needing to repeat the whole installation and configuration process?

Thanks in advance,

  • It shouldn't really matter, but Java and Red Hat go together quite frequently, and CentOS is a clone of Red Hat.

    I wouldn't recommend gentoo for this personally, it is generally considered the most complex of those distributions.

    Cloning is quite possible. If they are all identical machines, you might just want to use dd to clone an image of the hard drive.

    Trevor Harrison : +1 re: gentoo: You are interested in continuous integration of your software, not gentoo's software, right? So yeah, avoid gentoo and the endless recompiling it forces you to do.
    Trevor Harrison : @Kyle Brant: right now you have 666 answers in your profile. Nice.
    Kyle Brandt : @Trevor Harrison: Nice, it was http://serverfault.com/questions/134864/maximum-speed-of-data-transmission/134873#134873 that was my 666th I think :-P
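
    To make the cloning idea from the answer concrete, the dd approach is roughly this (device and file names are placeholders, and the target disk needs to be at least as large as the source):

        # image the finished build server's disk
        dd if=/dev/sda of=/mnt/backup/buildserver.img bs=4M

        # write that image onto the next machine's disk (boot from a live CD/USB first)
        dd if=/mnt/backup/buildserver.img of=/dev/sda bs=4M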

IIS Hangs on SQL Connections when running ASP.net applications

We have a database server running SQL 2000 and two web servers hosting ASP.net applications. All three servers are running Windows Server 2003 SP2.

Our issue is repeatable: after about 2 weeks, IIS on one web server is no longer able to establish SQL connections. Static content loads fine. Other non-IIS applications are still able to contact the SQL database server. ODBC functionality also still works.

While running SQL Profiler, a connection is never established from IIS when it is in this state.

The only way to fix this situation is to restart the web server.

There are no firewalls installed on any machines.

  • If you load perfmon and look at .NET CLR Data, there are several performance counters that you can load to see the # of pools, # of pooled connections, failed connections, and failed commands.

    Consider that the connection pool defaults to 100 connections in the pool per process (per appdomain) by default. Is it possible that you're looking at pool exhaustion?

    PaulWaldman : Thanks for pointing me in this direction. Before seeing your response, I stopped and started the app pool in question and the issue was resolved. The next time this issue occurs I will look at the counters. The apps hosted on this server are very low usage. The server generally only has 2 simultaneous users. Do you think the application pool could be exhausted with such low usage?
    JohnW : It's possible - 2 weeks for this to occur with only 2 users may be indicative of something wrong with the app. I'd set up perfmon to log these counters every hour or so, and you should be able to form a trend within a few days of use.
    PaulWaldman : Thanks John. After restarting the app pool the number of pool connections within the .Net SQL client was 9. Two days later, and it is up to 54. This does not seem like normal behavior to me.
    PaulWaldman : I've inherited these web apps about 2 years ago. No code changes have been made within the past year. This issue started to take place about 2 month ago. Because both of these apps are quite large, one is completely dynamic based on SQL queries, my guess is there is a function of one of the apps that they have only started to use when this issue started occurring.
    JohnW : One thing to note is that by default, IIS 6 recycles app pools every 29 hours (1740 minutes). So, unless that has been disabled/overriden for the app pool, you may be having this issue sooner than you may think. I'm not sure if this is something you could recycle every 4 hours due to the impact on user sessions, but that may buy you some time.
    From JohnW
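
    Not part of the original exchange, but a complementary view from the SQL 2000 side: counting how many connections each host/application is holding can confirm a slow leak between app pool recycles.

        -- group current connections by originating host and application
        SELECT hostname, program_name, COUNT(*) AS connections
        FROM master.dbo.sysprocesses
        GROUP BY hostname, program_name
        ORDER BY connections DESC;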

32bit vs 64bit guest VM and RAM usage

Why does a 32bit domU (Xen guest VM) use less RAM than a 64bit?

Notes: The same software compiled for a different arch (AMD64 vs. i686). Obviously this is Linux or BSD or something easily ported. Maybe this is also a good one for SO.

I've read this is so. I can guess why, but I'd like to hear everyone's comments.

  • Under the same workload, a 32-bit system will always use less memory than a 64-bit one, mainly due to two reasons: the bigger size of executables, pointers, variables etc., and the additional kernel overhead of managing a bigger address space.

    This of course doesn't happen only to virtual machines, but to physical systems too.

    sims : Yeah, address space... Is this the reason why executables are larger as well?
    Massimo : No, they're larger because they are compiled using 64-bit pointers and (usually) variables, instead of 32-bit ones.
    sims : No? What do you mean? A 64bit machine has a larger memory address space than 32-bit. Would not that make those pointers larger? You actually have to store a larger address. At least that was my guess.
    Massimo : Well, we could say yes and no at the same time :-) The executables are larger not *directly* because of the bigger address space... but because they're compiled using larger pointers; in order to address it, of course.
    sims : OK, good, I'm not a retard. I just wanted to make sure I understood what I was reading. In that case I will make my VMs 32bit - since none of them have access to more that 4GB of RAM anyway. Thanks for explaining that!
    From Massimo
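
    A tiny illustration of the pointer-size point discussed above (not from the thread): the same program compiled for each architecture reports different sizes.

        #include <stdio.h>

        int main(void)
        {
            /* typically prints 4 and 4 on 32-bit x86, 8 and 8 on x86-64 Linux/BSD (LP64) */
            printf("sizeof(void *) = %zu\n", sizeof(void *));
            printf("sizeof(long)   = %zu\n", sizeof(long));
            return 0;
        }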

How to rewrite index.php (and other valid default files) to the document root using mod_rewrite?

Hello,

I would like to redirect index.php, as well as any other valid default file (e.g. index.html, index.asp, etc.) to the document root (which contains index.php) with something like this:

RewriteRule ^index\.(php|htm|html|asp|cfm|shtml|shtm)/?$ / [NC,L]

However, this is of course giving me an infinite redirect loop. What's the right way to do this?

If possible, I'd like to have this work in both the development and production environment, so I don't want to specify an explicit url like http://www.mysite.com/ as the target.

Thanks!

  • I think something like this should work. Running off to a meeting, so I don't have time to test it, but basically you have to set a condition not to redirect the main www.site.com/index.php

    RewriteEngine On
    RewriteCond !^/index.php
    RewriteRule ^index\.(php|htm|html|asp|cfm|shtml|shtm)/?$ / [NC,L]
    
    TMG : @solefald - Thanks for the shot, that still doesn't seem to work. (I get a server 500 error with the following: RewriteCond !^index.php RewriteRule ^index\.(php|htm|html|asp|cfm|shtml|shtm)/?$ /ri2/ [NC,L] (/ri2 is my development directory). I also tried escaping the . and using ^/index\.php but still get 500. Also, how would I rewrite for index.php itself? Thanks for any advice.
    solefald : Try adding `$1` after `/ri2/`. `RewriteRule ^index\.(php|htm|html|asp|cfm|shtml|shtm)?$ /ri2/$1`. Also you may want to enable mod_rewrite logging, but it wont work out of .htaccess, you need to put it into `httpd.conf` or whatever config your apache is using. `RewriteEngine on RewriteLog /tmp/rewrite.log RewriteLogLevel 3`
    From solefald
  • You can try 301 redirecting it. This should be theoretically search engine friendly.

    RewriteRule ^index\.(php|htm|html|asp|cfm|shtml|shtm)/?$ / [R=301,L]
    
    TMG : Well, I don't know very much about server configuration (that's why I need you guys), but I do know SEO. Your statement is inaccurate; 301s DO LOSE PR: (see matt cutts interview with Eric Enge 1/25/10: "There is some loss of PR through a 301". http://www.stonetemple.com/articles/interview-matt-cutts-012510.shtml)
    Karol Piczak : Probably not all Google employees are unanimous (http://tinyurl.com/adam-lasnik-on-301) ;-) But I'm not a SEO specialist, so I won't argue here. The point is - even if you lose some PR (it's same domain here, so I'm not sure if that's what Eric meant), what's the correct way to do it? I supposed your goal is to redirect all requests to root - not to allow access to the main page through .htm/.html/... (duplicate content?). Maybe I misunderstood you here.
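
    For what it's worth, a common way to avoid the redirect loop (a general sketch, not tested against this particular site) is to key the condition off %{THE_REQUEST}, which reflects only what the client originally asked for, so the internal DirectoryIndex subrequest for index.php is not redirected again. In the development directory the target would be /ri2/ instead of /:

        RewriteEngine On
        RewriteCond %{THE_REQUEST} /index\.(php|html?|asp|cfm|shtml?)[\s?] [NC]
        RewriteRule ^index\.(php|html?|asp|cfm|shtml?)$ / [R=301,L]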

Who provides the best and/or most affordable VPS hosting for FreeBSD guests?

I've seen plenty of recommendations for Linux VPS hosting, but not as much for FreeBSD. Who's the Linode/Slicehost of the BSD world? As an added bonus, who provides cheap but serviceable BSD hosting?

  • Csoft.net - Reliable, affordable, knowledgeable.

    Maybe not the cheapest, but you usually get what you pay for.

    Hank Gay : Is that actually VPS hosting? Most of the info on the site seemed more like shared (probably using jails) hosting.
    Chris S : It depends on the plan you get. They have everything from VirtualHost type plans, to VPS, to full Private Servers. The "Advanced" and "Corporate" plans are VPS (It does actually mention that in the details of the plans).
    From Chris S
  • First-hand recommendation: Rootbsd.net. They are part of a larger Linux hosting house, but their FreeBSD expertise is quite high. I have had a few in-depth conversations with tech support, and always come away impressed.

    Craig : I have also had good luck with Rootbsd.net
  • While not providing a FreeBSD solution directly, the guys at http://prgmr.com let you install it. They have instructions on how to install NetBSD, and you can try to adapt them for FreeBSD. They use Xen, so it's only 8.0 i386 for the moment.

    Chris S : From PRGMR's own wiki "Xen support is in its infancy, but the stuff in 8-CURRENT only works on Xen hypervisor version 3.2 and above, so this should work with the newer prgmr.com servers running Xen 3.3, however, I don't know of anyone successfully working on it."
  • I've also been using Rootbsd.net for a few years, and they're great.

  • Rootbsd.net is the service I've seen recommended.

    From efk