Saturday, February 12, 2011

How can I reliably discover the full path of the Ruby executable?

I want to write a script, to be packaged into a gem, which will modify its parameters and then exec a new ruby process with the modified params. In other words, something similar to a shell script which modifies its params and then does an exec $SHELL $*. In order to do this, I need a robust way of discovering the path of the ruby executable which is executing the current script. I also need to get the full parameters passed to the current process - both the Ruby parameters and the script arguments.

UPDATE: The Rake source code does it like this:

  RUBY = File.join(Config::CONFIG['bindir'], Config::CONFIG['ruby_install_name']).
    sub(/.*\s.*/m, '"\&"')

But I'll leave this question open in case anyone has an alternative version.
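For reference, the `Config` top-level constant that Rake uses has since been deprecated in favor of `RbConfig`. A sketch of the same idea (the `exec` line is illustrative only, and `modified_args` is a made-up name for whatever argument list you build):

```ruby
require 'rbconfig'

# Full path of the interpreter running this script.
ruby = File.join(RbConfig::CONFIG['bindir'],
                 RbConfig::CONFIG['ruby_install_name'] +
                 RbConfig::CONFIG['EXEEXT'])  # EXEEXT is ".exe" on Windows

# ARGV holds the script's arguments, but not the flags the original
# ruby process itself was started with -- those are not portably
# recoverable from inside the script.
# exec(ruby, __FILE__, *modified_args)
```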

  • If you want to check on Linux, read these files:

    • /proc/PID/exe
    • /proc/PID/cmdline

    Other useful info can be found in the /proc/PID directory.

    From VitalieL
  • For the script parameters, of course, use ARGV :) -r

    From rogerdpack

MMORPG Client/Server Coding

How are the UDP and TCP protocols used in MMORPG client/server communication?

For example:

Does the client broadcast (player position, etc.) via UDP to the server, or vice versa?

Or is it more like using TCP where the Client requests that the server move the player. The server receives the request, moves the player and sends back to the client that the player is now at position xyz?

Must the chat channels be implemented using TCP?

Are there any good articles/books on this? I've found bits and pieces but it seems the real meat and potatoes are won from experience.

  • Half of your question (transport layer protocols used) could be answered by installing wireshark and looking at the traffic.

    From jj33
  • I don't know any details other than observations as a player, but most games most definitely do not wait for a server reply to move a character; that would kill the user experience unless the game were turn-based. What appears to happen is that movement is done client-side and sent to the server, which then relays those messages to other players. At least in WoW, if a player is lagging you may see them keep moving forward and then magically appear at another location later. That says to me that the client receives more than location data: it also learns that they are moving and in which direction, and extrapolates the movement in the absence of further data.

    From Davy8
  • Your best bet is probably to take a look at Planeshift's networking code; it's an open source MMO. I believe it's the most developed on the scene (last I checked).

  • A lot of games use UDP for movement-related activities--so, for example, when you are walking, chances are a bunch of UDP datagrams are being sent. The server still ultimately controls whether the movement is valid, but you don't necessarily care whether every single packet reaches the server. This is why a lot of game clients also use some kind of prediction mechanism.

    In terms of your second scenario, yes, it's very common for all control to be managed by the server. You don't want to trust anything the clients send; you should do error and input handling server-side to prevent people from hacking. You might also limit input per second.

    Anyway, a combination of UDP and TCP would be appropriate--you just need to ask yourself, "Do I want reliability or speed?"
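A minimal Ruby sketch of that UDP/TCP split over loopback sockets (the ports are OS-assigned and the message formats are invented for illustration):

```ruby
require 'socket'

# Movement updates over UDP: fire-and-forget datagrams, no delivery
# guarantee, no ordering -- fine for data that goes stale quickly anyway.
udp_server = UDPSocket.new
udp_server.bind('127.0.0.1', 0)          # 0 = let the OS pick a port
udp_port = udp_server.addr[1]

udp_client = UDPSocket.new
udp_client.send('MOVE 10 -3', 0, '127.0.0.1', udp_port)
move, _sender = udp_server.recvfrom(64)  # could be lost on a real network

# Chat over TCP: a reliable, ordered byte stream, at the cost of latency.
tcp_server = TCPServer.new('127.0.0.1', 0)
chat_client = TCPSocket.new('127.0.0.1', tcp_server.addr[1])
chat_client.write("hello\n")
chat = tcp_server.accept.gets
```

On the loopback interface the datagram reliably arrives, but nothing in the UDP path would notice if it didn't -- which is exactly the trade-off being described.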

  • There are many different possible implementations, but for the most part, they'll look like this. This pattern is repeated with almost any action in the game world.

    1. The client communicates to the server that the player wants to move.
    2. The client displays the player moving according to what it thinks should happen.
    3. The server validates that the move is something that could happen, given the location of the player.
    4. The server updates the client as to where the player is as far as the server is concerned.
     5. The client updates the player's position to reflect the server's world state.
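Those five steps can be sketched in Ruby (the class names and the clamp-based validation rule are invented for illustration):

```ruby
# Optimistic client movement with an authoritative server.
class GameClient
  attr_reader :position

  def initialize
    @position = 0
  end

  def request_move(delta)
    @position += delta  # step 2: predict locally instead of waiting
    delta               # step 1: the request that goes to the server
  end

  def reconcile(server_position)
    @position = server_position  # step 5: the server's world state wins
  end
end

class GameServer
  MAX_STEP = 5  # invented game rule used for validation
  attr_reader :position

  def initialize
    @position = 0
  end

  def handle_move(delta)
    checked = delta.clamp(-MAX_STEP, MAX_STEP)  # step 3: validate the move
    @position += checked                        # step 4: authoritative update
  end
end

client = GameClient.new
server = GameServer.new
client.reconcile(server.handle_move(client.request_move(3)))    # honest move sticks
client.reconcile(server.handle_move(client.request_move(100)))  # cheat gets clamped
```

After the honest move both sides agree on position 3; the oversized move is clamped to 5 on the server, so the client snaps back from its predicted 103 to the authoritative 8.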
  • You may be interested in Project Darkstar. It's an open source MMO framework.

    From Tom Ritter
  • You can't rely on the client to pass in truthful information. Someone will hack the protocol and cheat. Encrypting the data won't stop this - it just makes it a little harder to do.

    The client should only send in requests for movements etc and the server needs to sanity check the requests to make sure that they don't violate the game rules. The server should only send data back that the client absolutely needs - you can't rely on the client to get a chunk of world data and just filter out everything that the player can't currently observe. Someone will get hold of the extra information and exploit it.

    If the game needs to be 'real-time' then the client needs to assume that the server will allow the movement requests and update the display accordingly - and roll-back the movement if the server corrects it later. Under most conditions the client and server will agree and everything will flow smoothly. They won't agree when the client is attempting to cheat (which is their fault anyway) - or the client is lagging badly due to a poor connection (not much you can do about that).

  • I don't think there's a brief single answer to this question, it's quite wide in its scope. Still, a few points:

    • There's no need to "broadcast" just because you're using UDP. UDP is point-to-point most of the time, in my experience.
    • It's perfectly possible to do your own "secure" communications over UDP, you don't have to use TCP. It's not magical, just ... clever and intricate. :) But for the most part, as you imply, TCP is not suitable for real-time-ish communications in games.
    • There are ways to make TCP more suitable, search for "Nagle algorithm" for instance.
    • You can do chat over UDP, if you've already rolled your own lossless transport protocol on top of it. Many games do this.

    There have been articles about networking in Gamasutra, but I don't have any links handy right now. Not even sure if they're still openly available, sorry.

    From unwind
  • I think you can learn a lot from reading how others have implemented these types of systems. In that vein, may I point you to the work of Tim Sweeney and the Croquet Consortium:

     1. Unreal Networking Architecture
    2. The Croquet Project

    Tim Sweeney's papers transformed the way I thought about programming. I can't recommend them enough.

Best version control system for a non-networked environment?

I am mentoring the programming group of a high school robotics team. I would like to set up a source control repository to avoid the mess of manually copying directories for sharing/backups and merging these by hand. The build location will not usually have network access, so this has led me to distributed version control systems (DVCS), which I am not familiar with.

The largest requirements are the following:

  1. Works in Windows XP and Vista. (absolute must)
  2. Changes can be committed locally. (Seems to be the case with all DVCS's)
  3. Repositories from multiple machines can be merged without network access. (Possibly by storing the repository on a USB drive and swapping the drive to another machine, then merging from there)

It should also be easy to learn and use, preferably through a graphical UI, as I am working with high school students who have never used a version control system.

Any suggestions as to which DVCS fits this best?

EDIT:

Thanks for the answers. Mercurial looks pretty good, but does it support merging repositories from one directory to another, or do I have to set up a local network to merge across?

  • Git is lovely, but its Windows support is lax in the extreme (even with MSysGit). I would recommend Mercurial. I haven't actually used it myself, but I have heard that its Windows support is quite usable. Also, it has a slightly easier learning curve for people coming from traditional VCS (like SVN).

  • Mercurial is pretty easy to use on both Windows and Linux. TortoiseHg is a GUI front end for Windows that integrates into Explorer; it works fine. Both are open source. It is my understanding that using Git under Windows is less simple.

    Thanks for the answers. Mercurial looks pretty good, but does it support merging repositories from one directory to another, or do I have to set up a local network to merge across?

    Mercurial/TortoiseHg will do this (and more), as will all of the other distributed version control tools (as far as I know). I believe it will solve your problem. It is a DVCS and, with TortoiseHg, it is easy to use on Windows. Other distributed version control tools would probably work too (bzr, for example), but I have less experience with them. Subversion (svn) is a centralized version control tool. With some workarounds you could get it to function in your environment, but it really does not address the issues you want solved. I have no idea why other responders are suggesting it.

    From ejgottl
  • I found SVN to be amazingly simple to set up and use, especially for a single user!

    One thing that I found really interesting--the ssh+svn protocol used SSH's ability to run a command line on the remote system to actually start SVN, so there was actually NO setup at all on the server outside creating a directory for your repository.

    SVN has a lot of shells if you don't like the CLI (TortoiseSVN on Windows)--so it's as easy to use as anything else.

    From Bill K
  • I guess Mercurial would fit your needs well.

    • It's written in Python, so you won't have any problems running it under Windows.
    • Changes can be easily applied locally using patches or bundles
    • There are a couple of GUI tools available (look for TortoiseHg)

    Mercurial is the VCS I'm personally using. It's really easy to learn. You definitely want to try it over Git, since its support for Windows is way better.

    From skinp
  • I love bazaar: http://bazaar-vcs.org/

    It has everything you ask for and it is very easy to use.

    From Davide
  • Thanks for the answers. Mercurial looks pretty good, but does it support merging repositories from one directory to another, or do I have to set up a local network to merge across?

    Yes it does. All the DVCSes support this. The only difference between this and merging from a server is typically that you pass the appropriate command a local file path instead of a URL.

    In Mercurial it looks like this:

    Assume you have two repositories, one on your hard disk (C:\Project) and one on your flash drive (F:\Project):

    cd C:\Project
    hg pull F:\Project

    Similarly if you have two repositories on the same system:

    cd C:\Project
    hg pull C:\Project1

    If there's a need for a merge, it will create two "heads" and ask you to merge them with hg merge.

    added 1 changesets with 1 changes to 1 files (+1 heads)

    (run 'hg heads' to see heads, 'hg merge' to merge)

    Notice that that's the same as if you pulled and merged from a server.

    From quark

How do you do exponentiation in C?

I tried "x = y ** e", but that didn't work.

  • use the pow function (it takes floats/doubles though).

    man pow:

       #include <math.h>
    
       double pow(double x, double y);
       float powf(float x, float y);
       long double powl(long double x, long double y);
    

    EDIT: BTW, for the special case of positive integer powers of 2, you can use bit shifting: (1 << x) will equal 2 to the power x. There are some potential gotchas with this, but generally it would be correct.

    From Evan Teran
  • To add to what Evan said: C does not have a built-in operator for exponentiation, because it is not a primitive operation for most CPUs. Thus, it's implemented as a library function.

    Also, for computing the function e^x, you can use the exp(double), expf(float), and expl(long double) functions.

    Note that you do not want to use the ^ operator, which is the bitwise exclusive OR operator.

    John Rudy : I'm just learning C, and that ^ threw me for a major loop at first. I'm beginning to "get it" now, but your reminder is very valuable for me and (I'm sure) hundreds more like me. +1!
  • Or you could just write the power function yourself, with recursion as an added bonus:

    int power(int x, int y) {
        if (y == 0)
            return 1;
        return x * power(x, y - 1);  /* assumes y >= 0 */
    }
    

    Yes, yes, I know this is less efficient in space and time complexity, but recursion is just more fun!!

    From Mark Lubin
  • pow only works on floating-point numbers (doubles, actually). If you want to take powers of integers, and the base isn't known to be a power of 2, you'll have to roll your own.

    Usually the dumb way is good enough.

    int power(int base, unsigned int exp) {
        unsigned int i;
        int result = 1;
        for (i = 0; i < exp; i++)
            result *= base;
        return result;
    }
    

    Here's a recursive solution which takes O(log n) space and time instead of the easy O(1) space O(n) time:

    int power(int base, int exp) {
        if (exp == 0)
            return 1;
        else if (exp % 2)
            return base * power(base, exp - 1);
        else {
            int temp = power(base, exp / 2);
            return temp * temp;
        }
    }
    
    Evan Teran : it'll work fine if you cast your int to a double/float and then back to int.
    ephemient : Inefficient, though, and rounding error *will* make a difference when the result gets near INT_MAX.
    From ephemient
  • The non-recursive version of the function is not too hard - here it is for integers:

    long powi(long x, unsigned n)
    {
        long p = x;
        long r = 1;  /* was 1.0, left over from the double version below */

        while (n > 0)
        {
            if (n % 2 == 1)
                r *= p;
            p *= p;
            n /= 2;
        }

        return r;
    }
    

    (Hacked out of code for raising a double value to an integer power - had to remove the code to deal with reciprocals, for example.)

    ephemient : Yes, O(1) space O(log n) time makes this better than the recursive solution, but a little less obvious.
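For comparison, here is the same iterative square-and-multiply idea rendered in Ruby (purely illustrative, since Ruby already has a built-in ** operator):

```ruby
# Iterative square-and-multiply: O(log n) multiplications, O(1) space.
def powi(base, exp)
  result = 1
  while exp > 0
    result *= base if exp.odd?  # odd exponent: fold one factor into the result
    base *= base                # square the base
    exp /= 2                    # halve the exponent
  end
  result
end
```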

SQL - Find where in a query a certain row will be

I'm working on a forums system. I'm trying to allow users to see the posts they've made. In order for this link to work, I'd need to jump to the page on the particular topic they posted in that contained their post, so the bookmarks could work, etc. Since this is a new feature on an old forum, I'd like to code it so that the forum system doesn't have to keep track of every post, but can simply populate this list automatically.

I know how to populate the list, but I need to do this:

Given a query, where will X row within the query (guaranteed to be unique by some combination of identifiers) appear? As in, how many rows would I have to offset to get to it? This would be in a sorted query.

Ideally, I'd like to do this with SQL and not PHP, but if it can't be done in SQL I guess that's an answer too. ^_^

Thanks

  • The thing about databases is that there is no real "order" to them. You can use the SCOPE_IDENTITY() function to return the unique ID of the inserted record, then write some sort of function to paginate until that record is found.

  • If you're using MSSQL, you could use the ROW_NUMBER() function to add an auto-incrementing number to each row in a query.

    I don't know what good that would do you though. But it will do what you asked -- assign a number to the position of a row within the result set of a given query.

    If this is written in PHP though, you're probably using MySQL.

  • Hmm, this solution makes a few assumptions, but I think it should work for what you're trying to do, if I understand it correctly:

    SELECT count(post_id) FROM posts
      WHERE thread_id = '{$thread_id}' AND date_posted <= '{$date_posted}'
    

    This will get you the number of rows in a particular thread (whose ID I assume you've pre-calculated) which are equal to, or earlier than, the date posted (that of the specific user post in question).

    Based on this information (say it's the 15th post in that thread), you can calculate what page the result would be on from the forum's paging values, i.e.

    // dig around forum code for number of items per page
    $itemsPerPage = 10; // let's say
    $ourCount = getQueryResultFromAbove(); 
    
    // this is the page that post will be on
    $page = ceil($ourCount / $itemsPerPage);
    
    // for example
    $link = '/thread.php?thread_id='.$thread_id.'&page='.$page;
    
    Cervo : I like this because it doesn't get all the rows in one query...
    Nicholas Flynt : Oohhh... clever indeed. Thanks!
    From Owen
  • Expanding on Troy's suggestion, you'd need a sub-query, basically,

     select row_number() OVER(ORDER BY MessageDate DESC) 
     AS 'RowNum', * from MESSAGES
    

    then put an outer select to do the real work:

      select RowNum, Title, Body, Author from (
        select row_number() OVER (ORDER BY MessageDate DESC)
          AS RowNum, * from MESSAGES
      ) AS numbered
      where AuthorID = @User
    

    Use RowNum to calculate the page number.

  • I agree with Troy; you are probably going about this the wrong way. To fix it we'd have to know more details, but in any case, in MySQL you can do it like this:

    SET @i=0;
    SELECT number FROM (SELECT *,@i:=@i+1 as number FROM Posts 
    ORDER BY <order_clause>) as a WHERE <unique_condition_over_a>
    

    In PostgreSQL you could use a temporary sequence:

    CREATE TEMPORARY SEQUENCE counter;
    SELECT number FROM (SELECT *,nextval('counter') as number FROM Posts 
    ORDER BY <order_clause>) as a WHERE <unique_condition_over_a>
    
  • I think you mean something like this (MySQL)?

    START TRANSACTION;
    
    SET @rows_count = 0;
    SET @user_id = ...;
    SET @page_size = ...;
    
    SELECT 
         @rows_count := @rows_count + 1 AS RowNumber
        ,CEIL( @rows_count / @page_size ) AS PageNumber
    FROM ForumPost P
    WHERE 
        P.PosterId = @user_id;
    
    ROLLBACK;
    
    James Curran : You jump from @row_number to @row_count, but ignoring that, wouldn't it just be easier to initialize @row_count to 0, and skip the IFNULL?
    Kris : Yes, you are right. But in my defense, I was too low on caffeine when I wrote this.
    From Kris
  • Most SQL platforms have a proprietary extension of IDENTITY columns or sequences that increment with every item in a table. Most also have temporary tables.

    -- sketch (T-SQL flavored): capture the query results with an IDENTITY column
    CREATE TABLE #numbered (rownum INT IDENTITY(1,1), post_id INT);

    INSERT INTO #numbered (post_id)
    SELECT post_id FROM posts
    ORDER BY something;

    then the identity column is the number in the query and it tells you how many entries before/after it.

    The important thing is to order by something. Otherwise you may get different orders each query in which case your number means nothing...

    From Cervo

window.ScrollMaxY or X - How to set in FireFox 3?

window.scrollMaxY can be set via that property in IE and older versions of Firefox, but when trying in FF3 it says "Cannot set this property as it only has a getter".

What is my alternative?

EDIT:

The reason why I'm asking is that I'm fixing some very horrible JS written by someone else, it has a function to keep a div centered on the page while scrolling, and has this line:

// Fixes Firefox incrementing page height while scrolling
window.scrollMaxY = scrollMaxY

Obviously this doesn't work, but the main issue is that when the page is scrolled, it grows in length.

  • window.scrollMaxY can be set via that property in IE and older versions of Firefox

    I don't see that this exists in IE at all.

    If I assign a value to it before ever reading its value, the assignment succeeds, although changing it has no visible effect. Once I've successfully assigned a value to it, I can query and modify it as much as I like, though the browser will no longer update it to reflect the actual scroll limit of the window - this behavior would appear to exist for compatibility with code that might use this variable name in other browsers, not expecting it to be pre-defined.

    What would you expect modifying it to do?

    (testing in IE6 / FF3, answer revised to note pre-query vs post-query behavior)

    From Shog9
  • Why would you want to set it anyway? This property is automatically computed by the browser and contains the maximum vertical or horizontal scroll position possible.

    To scroll the window, you use window.scroll(x, y). When calling window.scroll(0, window.scrollMaxY), for example, the window is scrolled to the bottom of the page.

  • Sounds like what you are looking for is different CSS. Instead of trying to bend the browser to fit the HTML, it would be easier to find a better solution for the actual problem; keeping the div in place.

    To position elements relative to the window use position: fixed;

    My guess is that the code you are looking at was originally a workaround for the lack of support for fixed positioning (IE6 doesn't support it).

    From Borgar

Help finding C++ interval tree algorithm implementation

I'm trying to find an efficient C++ interval tree implementation (most likely based on red-black trees) without a viral or restrictive license. Any pointers to a clean, lightweight, standalone implementation? For the use case I have in mind, the set of intervals is known at the outset (there would be, say, a million) and I want to be able to quickly obtain a list of intervals that overlap a given interval. Thus the tree, once built, will not change -- it just needs to answer queries rapidly.

  • The C++ standard library offers red/black tree based containers: std::map, std::multimap, std::set and std::multiset.

    Really, I can't think of any way to handle this more efficiently than keeping a std::map of iterator pairs and passing those pairs to upper_bound() and lower_bound(). You'd want the iterator pairs kept in a map themselves so that you could easily zero in on which pairs are likely to be in the interval (if the beginning iterator in the "candidate interval" comes after the end of the given interval you're looking at, then you can skip checking that -- and all later -- iterator pairs).

  • There is a version presented at C++ code for Red-Black Trees And Interval Trees, and a trimmed down implementation at my homepage.

.NET AJAX 1.0 Async Callback Modifies Form Action When Server.Transfer is Used

I have a web form that I am attempting to implement dynamic drop down lists on using the .NET AJAX 1.0 extensions. I have successfully implemented the needed bits, but have an interesting quirk.

When I select a value from my first drop down list, my callback happens and my page is updated correctly. When I select the next value, I receive the following error:

Sys.WebForms.PageRequestManagerServerErrorException: An unknown error occurred while processing the request on the server. The status code returned was: 404

Regardless of what control I use first, the first request works and the second does not. Looking at my IIS logs, I see the following lines:

2008-10-17 14:52:14 W3SVC1 127.0.0.1 POST /Aware/Xtend/mParticipant/NewPlannedService.aspx WIN=Participant_1224255079212&Title=Participant 80 - 127.0.0.1 200 0 0

2008-10-17 14:52:20 W3SVC1 127.0.0.1 POST /Aware/mParticipant/NewPlannedService.aspx WIN=Participant_1224255079212&Title=Participant 80 - 127.0.0.1 404 0 0

As you can see my post URL has completely changed. Using Fiddler to watch the request/response, I can see this in the response from the server:

|formAction||NewPlannedService.aspx|

This is simply the name of the page that is being executed; the relative path and query string have been dropped.

I can resolve this issue by adding the following to the end of my Async callback method:

this.Form1.Action = Request.Url.PathAndQuery

But this seems incredibly lame and smells somewhat like moldy cheese to me. Can any one point me in the right direction?

UPDATE: Upon further inspection I discovered that NewPlannedService.aspx was not the original executing page. Page1.aspx was executing and then called Server.Transfer("/folder/NewPlannedService.aspx"). So the URI in the browser was http://whatever.com/Page1.aspx, but the actual page that was executing was http://whatever.com/folder/NewPlannedService.aspx

  • To solve this issue, I created a javascript file called Ajax.Server.Transfer.Fixer.js with the following code:

    var originalFormAction = null;
    
    //capture the current form action value
    function BeginRequestHandler() {
      originalFormAction = theForm.action;
    }
    
    //set the form action value back to the
    //correct value
    function EndRequestHandler() {
      theForm.action = originalFormAction;
      theForm._initialAction = originalFormAction;
    }
    
    function RegisterRequestHandlers() {
    
      if (typeof (Sys) != "undefined") {
    
        Sys.WebForms.PageRequestManager.getInstance().add_endRequest(EndRequestHandler);
        Sys.WebForms.PageRequestManager.getInstance().add_beginRequest(BeginRequestHandler);
        Sys.Application.notifyScriptLoaded();
    
      }
    }
    
    //register request handlers after the application 
    //has successfully loaded.
    Sys.Application.add_load(RegisterRequestHandlers);
    

    Then added the following line to my Page_Load event:

    protected void Page_Load(object sender, EventArgs e)
    {
        PageScriptManager.Scripts.Add(
            new ScriptReference("~/Script/Ajax.Server.Transfer.Fixer.js"));
    }
    
    From NotMyself

Copy files to clipboard in C#

I have a WinForms TreeView (node, subnodes). Each node carries some additional info in its Tag. Also, each node maps to a file on the disk. What's the easiest way to copy/cut/paste nodes/files in C#? Would be nice to have some sample code. Thanks

  • Consider using the Clipboard class. It features all the methods necessary for putting data on the Windows clipboard and for retrieving data from it.

    StringCollection paths = new StringCollection();
    paths.Add("f:\\temp\\test.txt");
    paths.Add("f:\\temp\\test2.txt");
    Clipboard.SetFileDropList(paths);
    

    The code above will put the files test.txt and test2.txt for copy on the Windows Clipboard. After executing the code you can navigate to any folder and Paste (Ctrl+V) the files. This is equivalent to selecting both files in Windows Explorer and selecting copy (Ctrl+C).

    smink : Clipboard is only a placeholder. When data is transferred to the clipboard there is no information about whether this is a copy or a cut operation. It is the responsibility of the calling operation to distinguish one from the other based on saved state.
    smink : For example, when you cut in Word the cut text is immediately removed from the document. The cut text is placed on the Windows clipboard and can then be pasted on demand.
    From smink
  • If you are only copying and pasting within your application, you can map the cut/copy operation of your treeview to a method that just clones your selected node. I.e.:

    TreeNode selectedNode;
    TreeNode copiedNode;
    
    selectedNode = yourTreeview.SelectedNode;
    
    if (selectedNode != null)
    {
        // Clone() returns object, so cast it back to TreeNode
        copiedNode = (TreeNode)selectedNode.Clone();
    }
    
    // Then you can do whatever you like with copiedNode elsewhere in your app.
    

    If you want to be able to paste into other applications, then you'll have to use the clipboard. You can get a bit fancier than just plain text by learning more about the IDataObject interface. I can't remember the source, but here's something I had in my own notes:

    When implemented in a class, the IDataObject methods allow the user to store data in multiple formats in an instance of the class. Storing data in more than one format increases the chance that a target application, whose format requirements you might not know, can retrieve the stored data. To store data in an instance of IDataObject, call the SetData method and specify the data format in the format parameter. Set the autoConvert parameter to false if you do not want stored data to be converted to another format when it is retrieved. Invoke SetData multiple times on one instance of IDataObject to store data in more than one format.

    Once you've populated an object that implements IDataObject (e.g. something called yourTreeNodeDataObject), then you can call:

    Clipboard.SetDataObject(yourTreeNodeDataObject);
    
    From AR
  • this is sweet! thanks so much!

    um...how would you cut a file onto the clipboard?

    Cheers, Mark

    From Mark

Can someone post a well formed crossdomain.xml sample?

I've been reading that Adobe has made crossdomain.xml stricter in Flash 9-10 and I'm wondering if someone can paste me a copy of one that they know works. Having some trouble finding a recent sample on Adobe's site.

  • This is what I've been using for development:

    <?xml version="1.0" ?>
    <cross-domain-policy>
    <allow-access-from domain="*" />
    </cross-domain-policy>
    

    This is a very liberal approach, but is fine for my application.

  • http://www.adobe.com/devnet/flashplayer/articles/fplayer9_security.html

  • If you're using webservices, you'll also need the 'allow-http-request-headers-from' element. Here's our default, development, 'allow everything' policy.

    <?xml version="1.0" ?>
    <cross-domain-policy>
      <site-control permitted-cross-domain-policies="master-only"/>
      <allow-access-from domain="*"/>
      <allow-http-request-headers-from domain="*" headers="*"/>
    </cross-domain-policy>
    
    From ThePants

Creating Visual Studio toolbar commands to execute batch files

I have a few batch files I need to run frequently in developing a certain project. I'd like to create a Visual Studio toolbar called "MyProject" and have commands underneath to execute these batch files. What is the easiest way to accomplish this?

  • In the Tools menu, select External Tools... and add references to the batch files. Then right-click on a toolbar, select Customize..., go to the Toolbars tab, click on New..., name your new toolbar, click on OK, go to the Commands tab, select the Tools category and drag-drop the appropriate External Command entry onto your custom toolbar.

    If you need to run batch files that always run right before or after a build, you're probably better off making use of build events.

Validating Web Pages

I have been developing websites for a couple years now and I almost never check if my pages are valid html and css. My check is by using a site such as browsershots.org and checking how it looks in all the different browsers. However recently I have been taking a college course and the prof wants us to validate every thing we turn in. It got me to thinking.

should I care if my pages validate or not?

  • Yes. Your teacher may reduce your grade otherwise.

    From Dimitry Z
  • Just checking that your webpage looks good in different browsers might seem to work now, but in the future web browsers will change and your page might not look right anymore. If your pages are valid HTML and CSS, however, newer browsers should display them correctly in the future.

    From yjerem
  • I find that validation is a matter of principle more than a requirement. It's very hypocritical to slate IE for its lack of standards compliance when we still only test against the popular browsers.

    ALWAYS check to see if your page is valid, always.

    From EnderMB
  • Well, this is almost holy war territory. If you are having trouble with CSS, making sure your HTML and CSS validate is a really good diagnostic step. Very badly munged HTML can also cause accessibility problems. Otherwise, there aren't really any practical reasons to worry about it.

    Taking care in your work and pride in your craftsmanship, though, that's something else. If your pages validate, it's like a little gold star and you get a warm fuzzy feeling for a few seconds. It's a best-practice.

    If you like doing things the absolutely right way, then sure, care about it.

    From Flubba
  • You can (and should) validate your CSS/HTML.

    Beyond getting better grades, some projects and industries will require validation for various purposes. If you're interested in a future career in any of these sectors, you may as well start now :)

    From Owen
  • Webpage validation is, in my mind, a complex matter. On the one hand, you have the W3C recommendation - that is just that: a recommendation - that might or might not (probably not) render equally in all browsers. On the other hand, you have your CSS/HTML tweaks and hacks that make each page render pixel-perfectly, but most probably don't validate with the W3C validator.

    In real life, it's a world of compromises. I, personally, try to do both - have as few validation errors, but having the main emphasis on it actually looking good with widely used browsers.

    But, in academic life, I think it's entirely fair for the professor to require 100% W3C compliance. It is, after all, the closest thing we have to a spec for HTML/XHTML, and the theory is what academic people are ultimately interested in.

  • Yes, standards are your defence in a changing world. Just because your site works with the current crop of web browsers, there is no guarantee it will work with the next if you are not standards compliant. Let's be honest: web browsers will often be updated many times within the lifespan of a web site.

    As a diligent developer I am sure you will retest as browsers get updated, but there is a window between the update and your testing (and remediating). :-)

  • I always validate my web pages, and I recommend you do the same, BUT many large company websites DO NOT and cannot validate, because making the site look exactly the same on all systems requires breaking the rules.

    In general, valid websites help your page look good even on odd configurations (like cell phones) so you should always at least try to make it validate.

  • Yes, for the reasons already covered here.

    I realise you are not necessarily talking about commercial websites, but it is good to act as if you are anyway.

    From da5id
  • Validate to make sure you didn't make mistakes. If the validator complains about something you had to put in for browser compatibility, you can probably ignore that.

  • Absolutely! Your sites should be valid! Valid HTML/CSS is much more likely to work in future browsers 10 years from now!

  • As Eric said, a lot of big websites don't validate, however, if you start with a validating website that works perfectly in, for example, Firefox, Safari/Chrome, or Opera, chances are it will be right or mostly right in the other 3 and will only require minor adjustments for them to be right.

    Then you can work on any hacks that might be needed for Trident based browsers like IE. For the majority of general websites, the hacks needed to make things work in IE7 and IE6 will still be valid.

    Once you are at that point, it's easy to debug any problems and then start making any adjustments/hacks that don't validate.

    It's easy to determine which browser is being used (assuming it's sending the correct user agent) with PHP if you can use server-side programming, or alternatively with JavaScript on the client side, and then load a specific stylesheet on top of a regular stylesheet for each browser. IE6 and IE7 support conditional comments that load specific stylesheets for them without any extra work, but there isn't anything comparable for the Gecko, WebKit or Presto browsers (Firefox, Safari/Chrome or Opera), so an alternate method is needed for anything specific to those browsers.
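    A minimal client-side sketch of that idea, with hypothetical function and stylesheet names (and the usual caveat that user-agent sniffing is fragile and feature detection is generally preferable; the regexes below match the UA strings of that era):

    ```javascript
    // Pick a browser-specific stylesheet from the user-agent string.
    // Order matters: IE first, then Firefox (needs both "Gecko/" and
    // "Firefox" so Safari's "like Gecko" doesn't match), then WebKit,
    // then classic (Presto-based) Opera.
    function pickStylesheet(userAgent) {
      if (/MSIE/.test(userAgent)) { return "ie.css"; }
      if (/Gecko\//.test(userAgent) && /Firefox/.test(userAgent)) { return "gecko.css"; }
      if (/AppleWebKit/.test(userAgent)) { return "webkit.css"; }
      if (/Opera/.test(userAgent)) { return "presto.css"; }
      return "default.css";
    }

    // In the page, append the chosen sheet on top of the regular one:
    // var link = document.createElement("link");
    // link.rel = "stylesheet";
    // link.href = pickStylesheet(navigator.userAgent);
    // document.getElementsByTagName("head")[0].appendChild(link);
    ```

    For IE6/IE7 specifically, conditional comments avoid the sniffing entirely.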

    From Matt
  • I think validating your pages is a good thing, however there are a few myths around this.

    Firstly, you probably don't want to be doing XHTML, as you are almost invariably serving it incorrectly to browsers. Stick with HTML 4.01 Strict.

    Secondly, having valid pages is nice, and useful, but don't beat yourself up over it, and don't validate other people's pages -- some people really enjoy this, and it only ever comes across as mean.

    Finally, try to 'fill' the text in your pages so that your lines are not hugely long. It is unrelated to validating, but helps a human reader of your HTML.

    From mattl

Record Component Video

Hi,

I've found plenty of simple, cheap solutions for recording composite video sources to my PC, a laptop. But I'd like to be able to record in HD and use component cables.

I was wondering if anyone had a good solution for this.

Required:

Cheap

Desired:

Portable, external hardware device
  • Depends on your definition of cheap, but there is the HD PVR from Hauppauge, which works well, from ~$200.

    From Rob Walker
  • I was hoping for something cheaper, but that's okay. I've researched this some more on my own and the Hauppauge device was the best I could find too.

    Thanks!

Display a loading icon while a network resource is being downloaded

I'm trying to display a loading icon while my iPhone app downloads a network resource, but I can't figure out how to make it show up correctly.

I searched around and found some details on the UIActivityViewer class, but the available example source code didn't work, and the documentation is kind of terse.

Could someone provide a simple example on how to use this class?

  • Assuming you've got a view controller set up and would like to add a UIActivityIndicatorView to it, here's how you could do it.

    //assume you've got a member variable called indicator, which you can use later to clean up:

    For your interface (.h file):

     UIActivityIndicatorView   *indicator;
    

    For your implementation (.m file):

    Start the Animation

    CGRect                  b = self.view.bounds;
    indicator = [[UIActivityIndicatorView alloc] initWithActivityIndicatorStyle: 
                                                 UIActivityIndicatorViewStyleWhite];
    //center the indicator in the view (it is 20x20 points)
    indicator.frame = CGRectMake((b.size.width - 20) / 2, (b.size.height - 20) / 2, 20, 20); 
    [self.view addSubview: indicator];
    [indicator release]; //the superview retains it from here on
    [indicator startAnimating];
    

    Stop the Animation

    [indicator stopAnimating];
    [indicator removeFromSuperview];
    indicator = nil;
    

  • Ben,

    Thanks for the answer. I tried your code sample, but the final app doesn't actually display the activity view at all. If I don't include the "stop animation" part of the process, I can see the icon spinning away after the new image is downloaded from the server.

    I'm starting to guess that I need to load the image in a separate thread, so the UI itself can continue to function while the resource is downloaded.

  • By the way, for those users/visitors who come to this thread later on: the class is actually named UIActivityIndicatorView.

  • Ben's answer looks pretty similar to what I'm doing - your guess about the thread is probably accurate. Are you using NSURLConnection to handle your downloading? If so, are you using the synchronous or asynchronous version? If it's the synchronous version and you're simply starting and stopping the animation around the synchronous call, then the UI isn't updating until after you've stopped the animation.

    From Jablair

Career future as Software Developer

It's around the time of my annual career review and each year I get asked what position I would like to get: Management? Architecture? Technical Expert?

So the big question here is: what is the future for technically oriented people? I mean real engineers who understand 'engineering' as their life's occupation.

Are you a 'Dilbert' like nerd? Which career path have you taken and what are the pro and cons of your path?

  • I think you're supposed to answer that one, not us ;-). Given an unsound economy I'd expect the most comfortable positions to be maintenance programming and middle management, and people who can apply themselves to a wide variety of tech (after all, your boss might switch everything to Linux/MySQL to "save money" tomorrow). But then, who wants stability if it isn't fun?

    Yaba : Sure - I have to decide for myself. I just wanted to get some hints from people who have chosen one path or the other. I have updated my question to be more concise.
    From Graham Lee
  • I wrote a blog post about exactly this subject a little while ago: http://harriyott.com/2006/02/12-career-moves-for-developers-without.aspx. I identified 12 possibilities of future careers.

    From harriyott
  • I suggest you read "My job went to India" by Chad Fowler.

  • Sounds like you have to decide whether to focus on learning a little bit of everything to get the bigger picture (Architecture) or to focus on a certain area (Technical Expert).

    Either way, the future is working on yourself and getting better at the things you do. The future of an engineer is learning how things can be made better and how you yourself can make them better.

    From Ansgar
  • Your choices are really up to you. I work for a big company and we have plenty of experience with these situations. Basically you have two 'paths' you could take without making any changes to your career.

    • You could become a 'manager'. I.e. you start by being a senior developer, then a team leader, then a project manager assistant, picking up the experience along the way. It does not mean that you don't get to be an engineer, but obviously you'll spend more time administrating things. I knew several people who turned down this path because they did not want to quit coding.

    • You could become an 'expert'. You gain more and more technological prowess; you know more and more. Ultimately you become a guru who advises others, does code reviews and works on the 'crack teams' of experts if something has to be done really quickly and really well. A variation on this is that you become an architect (System Architect or Database Architect), but that requires a certain frame of mind which not everyone has. A downside of this path is that you never get that high up in the chain of command. But sometimes that's actually an upside from a personal point of view :)

    There are other paths: you could become a resource manager (almost like switching from being a field operative to office paperwork duty).

    You could also become an Analyst and start working with clients to collect, analyse and process their requirements into specifications for the development team. It is great fun, but it requires an inclination to work with people a lot (and talk a lot, I would say :).

  • The old questions of what you are good at and what you like to do should come into play here when deciding what position you want. What is the structure where you work? Do they have Software Developer I, SD II, SD III, ... SD X? I imagine some places might, but it varies.

    I can see a future for some technically oriented people in becoming consultants or teaching various technical courses, e.g. J.P. Boodhoo does a week-long "Nothin' but .Net" course where he shares a passion for what he does that is a rare sight among the software engineers I've seen.

    A lot of people would say that I'm a Dilbert nerd and I have seen many times where what is written in those comics is painfully close to the truth.

    My career path hasn't been that long though I could go through the highlights and lowlights of it:

    1997 - Graduate from University with a Bachelor of Mathematics degree with majors in Computer Science and Combinatorics & Optimization. That name alone usually gets a "What is that?" or the eyes roll as it seems like something hoity-toity.

    1998 - Move to Seattle working for a dot-com doing web server development as a Software Design Engineer, under a NAFTA visa that gets switched to an H1-B the following year. The company was founded by some former Microsoft employees so the tools are all MS: MS-SQL Server, Visual Studio 6.0, IIS 3.0, Visual SourceSafe. Introduced to Hungarian notation, which seems like a nice way to name some variables.

    1999-2004 - Worked at the dot-com through the boom and the bust, where there was a web team formed instead of it being just an engineer or two, as well as the shrinkage down to being a couple of guys in the founder's basement working with some Russians on the web site code. Server code went from C/C++ ISAPI Extensions using a proprietary mark-up language to ASP in VBScript to ASP.Net 1.1 in C#. The company had an IPO in Feb. 2000 and by August 2000 had laid off 2/3 of the staff. Watched the Space Needle fireworks when Y2K occurred, as the office was a block from the Needle. Also in 2000, got to go to LA for the Spring Internet World, which was cool.

    2004 - Move to a different company, though still a dot-com that isn't profitable yet, that is huge with over 60 people in IT alone. Much more formalized process, though for the first few weeks I never met my supervisor. I talked with the director of IT and co-workers about what to work on during that time, but I find it interesting to go so long before meeting a boss. Company restructures their IT and software development so I get moved around, though I did like where I ended up. Still at a mostly Microsoft place where here I'm a front-end developer. Visa expired at the end of 2004 so I had to quit that job.... Had I known back in 1999 to start the green card process, I might still be down in Seattle, but anyway...

    2005 - In Calgary, start working for an Application Service Provider doing location-based services. Still a small team of developers, which I joined to help build their application for tracking and changing device settings, GIS, etc. Title here was Senior Application Developer. This was all ASP.Net 2.0.

    2007 - Move on to a technology company where I'm in the IT department, now being a web developer for applications built internally or where we have to do customizations with off-the-shelf software (still sometimes called "integration"). Some code is VB6, some VBScript, some C#, some JavaScript, for that nice mix of almost everything. The company has been around for 20 years and is profitable, a couple of other differences between here and other workplaces.

    Process and communication have been a couple of big things I've watched over my years working and seen some good ways to do things and some scary ways to do things. Gone from ISDN lines and 56K modems to T1 lines to DSL and cable modems for connectivity and browsers from Netscape 4 and IE 4 to IE 7 and Firefox 3. Seen IIS go from being traffic lights for managing to the spiffy MMC for IIS 7.0 now.

    Design Patterns are likely the most awesome thing I have found over the years. I tend to be a hand coder for some things, so I don't like having Label1 as the ID on a Label; it is better to give it a more meaningful name.

    At least that is the short version of my career as a Web Developer.

    From JB King
  • Hi Yaba,

    I went from programmer to technical team lead to project manager to manager (10 people) back to programmer again. There are pros and cons to each of these positions. I think it's worth trying each of the different career paths just to see if you like one more than the other. If nothing else it will give you a better understanding of those with different roles in your team. For example, I learned that managers don't always just make dumb decisions because they are stupid - often they are being forced to make those decisions by their boss in order to keep their job... You can learn a lot about corporations (particularly, how many are strangely dysfunctional) if you climb the ladder a rung or two. In the end I found out that I am happiest when I am building things. It's my passion and it is far more enjoyable for me to do that than any of the other roles that I tried. But I am glad that I tried them.

    By the way, in terms of job stability in an uncertain economy, I disagree with the poster who said middle management was one of the most comfortable positions. Those folks are often the first to go. Focus on your hard technical skills if you want a bit of job security and make yourself "indispensable" to your employer. Be the one who is suggesting new and better ways to do things without being asked.

  • In my opinion the only way to know what path to take is to 'suck it and see'. You may be a talented artist, for example, but if you're never exposed to Art you'd never discover the potential you have inside.

    From mrwiki
  • This answer may seem unorthodox, but here goes.

    After six years of software development, I'm prepping myself to attend to the travel agency biz my wife and I started. I kid you not. It's the sort of dream setup I have: a business where the business is pleasure. It's hard work though, and my wife who runs it by herself (home-based) is starting to feel the crunch.

    That doesn't mean I'm going to quit developing software though. I'm going to continue to write software, this time for our business, and if I'm lucky enough I'd be able to sell it to other travel agencies (provided, of course, that the sale won't give them a significant competitive advantage over us... insidious I know!).

    The point is I'm choosing to focus on one core business competency, as opposed to freelancing, wherein you jump from one set of business rules to another every project. Domain expertise in one field would definitely be useful in coming out with quality products that actually meet the needs of that specific domain/industry. Quite different from trying to analyze the business processes of totally unrelated industries every now and then.

    Another source of income I'm looking at would be teaching OOP, C# and .NET, on a part-time/per project/consultancy basis.

    From Jon Limjap
  • You should try all the possibilities and, in the end, choose the one that makes you most "happy".

    Some people can be extremely good at programming and extremely bad as project leaders; it all depends on their personality, etc...

  • As for the management part, I found an excellent answer on Rands in Repose.

    From Yaba