Monday, March 7, 2011

Web Parts with a markup file?

I'm an ASP.NET web part novice. I've built a few simple ones using only a class that derives from WebPart and overrides the CreateChildControls method, but nothing very substantial. My question is whether it's possible to have a web part that also takes advantage of a separate HTML/ASP.NET markup file to help provide some structure to the web part's output. In the past I just created server controls and added them to the Controls collection, but this seems like a silly way to try to create a non-trivial layout. Can I do this? Do I have to use an .ascx user control, or can I bypass that step? There are a lot of hello-world tutorials on web parts out there, but none seem to go past the CreateChildControls override. Thanks!

From stackoverflow
  • Yes, it is. Go here to learn about templated web parts, and go here to see all of the info he has on WebParts. I used this technique back in 2004/2005 and it worked very well.
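
    As a rough illustration of putting the layout in markup instead of building it all in code, here is a minimal sketch of a web part that hosts a user control (the path, file and class names are invented, and this is the user-control approach rather than the templated-web-part technique linked above):

    using System.Web.UI;
    using System.Web.UI.WebControls.WebParts;

    public class MarkupBackedWebPart : WebPart
    {
        protected override void CreateChildControls()
        {
            Controls.Clear();

            // The .ascx holds the HTML structure; the web part just hosts it.
            Control layout = Page.LoadControl("~/Controls/MyLayout.ascx");
            Controls.Add(layout);
        }
    }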

What's the best way to resolve name conflicts in javascript?

I recently wrote some javascript code that filled a drop down list based on some XML, pretty simple stuff. The problem was I had to write similar code to do almost the same thing on a different page.

Because the code was almost identical I named most of the functions the same, thinking that they would never be included in the same page. However, naming conflicts arose because both javascript files were eventually included in the same HTML page.

When I had to go back and change the names, I simply added first_ or second_ to the method names. This was a pain and it doesn't seem very elegant to me. I was wondering if there is a better way to resolve name conflicts in javascript?

From stackoverflow
  • Going slightly object oriented and using namespaces should prevent such conflicts IMHO. I'm not much of a web developer, so there might be better ways to do it specifically for Javascript, but this is generally what's done.

  • Try the JavaScript module pattern (or namespaces) used in various libraries.

    Try to be DRY (don't repeat yourself) so you can avoid name collisions. If the code is almost the same, it's better to avoid the duplication by creating a single function which can handle both cases. The function can take two parameters: which dropdown to populate and with what data. This helps maintainability as well.

    Update: I assume that you take the XML from an AJAX request. In this case you can create anonymous callback functions on the fly, with the appropriate parameters, inside a loop.
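
    As a minimal sketch of the namespace/module idea combined with a single parameterized function (all names here are invented):

    // One global object acts as the namespace; both pages share one function.
    var MyApp = MyApp || {};

    MyApp.dropdowns = {
        // selectId: id of the <select> to fill; xmlDoc: the parsed XML response
        populate: function (selectId, xmlDoc) {
            var select = document.getElementById(selectId);
            var items = xmlDoc.getElementsByTagName("item");
            for (var i = 0; i < items.length; i++) {
                var option = document.createElement("option");
                option.text = items[i].getAttribute("label");
                option.value = items[i].getAttribute("value");
                select.appendChild(option);
            }
        }
    };

    // Each page calls the same code with different arguments, so no name clashes:
    // MyApp.dropdowns.populate("firstList", someXmlDoc);
    // MyApp.dropdowns.populate("secondList", otherXmlDoc);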

  • I would look at how I could merge the two pieces of code (functions?) into a single function. If you need to populate a list box, then pass the list box id into the function, so you are not hard-coded to operate on one single control only...

    I did this on my rocket business's web site where I sold rocket motors with different delay values, but in essence, they were the same product, just a different delay value.

    Perhaps this will help explain what I'm trying to say... I use this when an image file happens to be missing: it displays a "no image" image in place of the real image.

    function goBlank(image)
    {
      if (image) {
        // Look the image up by its name attribute and swap in the placeholder.
        var imgobj = document[image];
        imgobj.src = "/images/blank.gif";
      }
    }
    

    In this case, you call it with:

    <img src="/images/aerotech.gif" name="header" onError="goBlank('header');">
    

    If you need more example with things like list boxes used, let me know. Perhaps even post some sample code of yours.

  • Another option (if possible) is to carefully tie the code to the element itself.

    e.g.

    <input type="text" name="foo" id="foo" value="World" onchange="this.stuff('Hello ' + this.value);"/>
    <script>
      document.getElementById('foo').stuff = function(msg){
        //do whatever you want here...
        alert('You passed me: ' + msg);
      };
    </script>
    
    LarryF : I like that. Kinda tricky, but I can see some uses for that...

Did isset work differently in older versions?

Hey!

I got some legacy code that has this:

<?PHP
    if(isset($_GET['pagina'])=="homepage") {
?>
HtmlCode1
<?php 
} else { 
?>
HtmlCode2
<?php 
} 
?>

I don't know exactly why, but this seems to be working. HtmlCode1 is loaded when I have ?pagina=homepage, and HtmlCode2 is loaded when the pagina var doesn't exist or is something else (I haven't really seen it with something else, just not there). The website is using PHP 4 (I don't know the exact version). But really, how can this work? I looked at the manual and it says isset returns a bool...

Anyone?

From stackoverflow
  • isset() returns true or false. In a boolean comparison, "homepage" would evaluate to true. So essentially what you have here is:

    if ( isset($_GET['pagina']) == true )
    

    If pagina equals anything, you will see HtmlCode1. If it is not set, you will see HtmlCode2.

    I just tried it to confirm this, and going to ?pagina=somethingelse does not show HtmlCode2.
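
    To see the loose comparison for yourself, here is a quick sketch you can run anywhere:

    <?php
    var_dump(true == "homepage");   // bool(true)  - any non-empty string loosely equals true
    var_dump(true === "homepage");  // bool(false) - strict comparison also checks the type
    var_dump(false == "homepage");  // bool(false)
    ?>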

  • I suspect that it's a bug as it doesn't really make sense to compare true/false with "homepage". I would expect the code should actually be:

    if (isset($_GET['pagina']) && ($_GET['pagina'] == "homepage")) {
    }
    
  • Some ideas how this could work (apart from the previously mentioned "homepage"==true):

    • Isset has been redefined somewhere?
    • It's a self-modified version of PHP?
  • The problem is that "==" isn't a type-sensitive comparison. Any (non-empty) string is "equal" to boolean true, but not identical to it (for that you need to use the "===" operator).

    A quick example, why you're seeing this behavior:
    http://codepad.org/aNh1ahu8

    And for more details about it from the documentation, see:
    http://php.net/manual/en/language.operators.comparison.php
    http://ca3.php.net/manual/en/types.comparisons.php (the "Loose comparisons with ==" table specifically)

    AntonioCS : Thanks! This must be why it is working

Developing apps for the nokia n73 in Java?

Hello,

I'm trying to find out if it's possible to develop, in Java, a key-capturing application to analyse writing styles when using the SMS composer on the N73 (using the S60 2nd Edition, Feature Pack 3 SDK). Originally I thought that all Java applications would be sandboxed, making it difficult to call the native key-capture functions available to Symbian, but nobody has been able to clarify this for me. Is there any truth to that?

Thanks,

A

From stackoverflow
  • Note: This answer comes from my friend who knows a lot more about these things.


    As this J2ME FAQ states,

    Can I access the phone's data (memory, phone book, inbox, pictures...) from J2ME?
    Generally no, this is considered a security risk and most manufacturers don't allow it. However, a very small minority do, so check out their developer's site.

    So that'd be no. There's no direct way in the MIDP libraries to access that data anyway. It may, however, be possible if you're lucky, but don't count on it. Also, according to Sun, this may be possible in MIDP 3.

  • I'll be talking as a user: Opera Mini (a java-based application) is able to read and write user data (phone memory and memory card).

    And I've also seen java-based applications that access hardware such as the phone's camera, and seen apps that call system APIs such as vibration or sound notifications.

    However, I don't know how they implemented those things.

    Note1: Nokia N73 is based on S60 3rd edition not 2nd edition.

    Note2: In some cases (such as accessing user data), user authorization is required, unless the application is signed using a certificate.

  • I've just started hacking about with my Nokia 6300 and J2ME. This phone supports MIDP2 and J2ME, but unlike the N73 it's Series 40, and doesn't use Symbian for its operating system.

    One of the applications that I want to write, needs to make use of the phone book and as far as I'm aware this is possible using the PIM API (JSR-75).

    AFAIK you can actually access SMS messages using C++ under Symbian. Try this link

Testing C code using the web

I have a number of C functions which implement mathematical formulae. To date these have been tested for mathematical "soundness" by passing parameters through command-line applications or compiling DLLs for applications like Excel. Is there an easy way of doing this testing over the web?

Ideally something along the lines of:

  • compile a library
  • spend five minutes defining a web form which calls this code
  • testers can view the webpage, input parameters and review the output

A simple example of a calculation could be to calculate the "accrued interest" of a bond:

Inputs: current date, maturity date, coupon payment frequency (integer), coupon amount (double)

Outputs: the accrued interest (double)

From stackoverflow
  • The quickest thing I can think of is to have these C programs compiled on the server, and to create a PHP page that receives the parameters, executes the compiled program on the server, and parses the output. Technologies other than PHP would also work just fine. What you need to figure out, for a specific technology, is:

    • How to start a process
    • How to redirect standard input/output

    I have also seen a number of web sites which let users submit their C code, which then gets compiled on the server. After that, the program is given some input file and produces output, which is then checked against the correct answer. For example, visit this site: http://acm.timus.ru/
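
    As a rough sketch of the PHP idea (the binary name, form fields, and argument handling are invented, and you would want stricter validation for anything beyond internal testing):

    <?php
    // Assume "accrued" is the compiled C program sitting next to this script.
    $current  = escapeshellarg($_POST['current_date']);
    $maturity = escapeshellarg($_POST['maturity_date']);
    $freq     = (int) $_POST['frequency'];
    $coupon   = (float) $_POST['coupon'];

    // Run the program and show the testers whatever it prints to standard output.
    $output = shell_exec("./accrued $current $maturity $freq $coupon");
    echo "Accrued interest: " . htmlspecialchars(trim($output));
    ?>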

  • You should have a look into automated testing. Manual tests will all have to be repeated every time you change something in your code. Automated tests are the solution for your kind of tests. Let testers write test cases with the accompanying results, then make them into unit tests.

    See also: unit testing

    Avi : True, but this doesn't answer the question.
    Norman Ramsey : Sometimes a person just asks the wrong question.
  • Or similarly, create a Perl CGI that checks input values and then passes them through to the C program. BTW This should only be done for testing and not for final deployment.

    You should really automate the testing to check your behaviour is as expected over a wide range of values.

    Or shouldn't you be testing this in an environment that is as close as possible to the final deployment environment?

    cheers,

    Rob

  • This is what you're looking for:

    http://codepad.org/

    It will execute C, C++, D, Haskell, Lua, and many others online, and display the results.

    If you've got a large library to compile it may get unwieldy, but testing a function briefly is simply a matter of pasting the code and hitting "Submit".

  • If you're going to do this, you should be sure that every web interaction is captured in a permanent database of tests. Then you can use this database to

    • Automatically re-run all tests if the software changes

    • Possibly find inconsistencies that result if a person gives you the wrong answer

    In other words, the web form should be the front end to a persistent infrastructure for testing, not a means of running tests that disappear just after they are viewed.

  • This sounds much like FIT. You could probably make a new fixture for it, or for one of the other language ports like the Python one, that calls a C library with your function. This would take advantage of the work that's gone into making FIT convenient, the kind of work Norman Ramsey recommends in his answer.

Dynamically creating a project in VS2005 using C#

Is it possible to programmatically or dynamically create a project in VS2005 using C#? If so, could someone provide some ideas or links on how to accomplish this?

From stackoverflow
  • Visual Studio projects are "simply" XML files. I say simply because there is a lot going on in them. To get an idea of what they consist of: create an empty project and open up the .csproj (C# project) or .vbproj (VB.NET project) in a text editor.

    You could then generate this XML file via code.

  • .csproj files are actually XML documents that represent MSBuild executions.

    The files can be generated using System.Xml, and if you need to validate them there's a schema detailed here.
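
    A minimal sketch of that approach with XmlWriter (the element names follow the MSBuild schema, but in practice you would copy the full structure from a project Visual Studio generated):

    using System.Xml;

    class CsprojWriter
    {
        static void Main()
        {
            const string ns = "http://schemas.microsoft.com/developer/msbuild/2003";

            XmlWriterSettings settings = new XmlWriterSettings();
            settings.Indent = true;

            using (XmlWriter w = XmlWriter.Create("Generated.csproj", settings))
            {
                w.WriteStartElement("Project", ns);

                // Basic build properties.
                w.WriteStartElement("PropertyGroup", ns);
                w.WriteElementString("AssemblyName", ns, "MyGeneratedAssembly");
                w.WriteElementString("OutputType", ns, "Library");
                w.WriteEndElement();

                // Files to compile.
                w.WriteStartElement("ItemGroup", ns);
                w.WriteStartElement("Compile", ns);
                w.WriteAttributeString("Include", "Class1.cs");
                w.WriteEndElement();
                w.WriteEndElement();

                // Pull in the standard C# targets.
                w.WriteStartElement("Import", ns);
                w.WriteAttributeString("Project", @"$(MSBuildBinPath)\Microsoft.CSharp.targets");
                w.WriteEndElement();

                w.WriteEndElement(); // Project
            }
        }
    }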

  • You can also create and edit msbuild files programatically using the msbuild api. See here for details: http://msdn.microsoft.com/en-us/library/microsoft.build.buildengine.project.aspx

    Here's an example that creates a project that compiles a single .cs file and references an external assembly:

    Engine engine=new Engine();
    engine.BinPath=RuntimeEnvironment.GetRuntimeDirectory();
    Project project=engine.CreateNewProject();
    
    project.AddNewImport(@"$(MSBuildBinPath)\Microsoft.CSharp.targets", null);
    
    BuildPropertyGroup props=project.AddNewPropertyGroup(false);
    props.AddNewProperty("AssemblyName", "myassembly");
    props.AddNewProperty("OutputType", "Library");
    
    BuildItemGroup items=project.AddNewItemGroup();
    items.AddNewItem("Reference", "Some.Assmebly");
    items.AddNewItem("Compile", "somefile.cs");
    
    project.Save("myproject.csproj");
    
  • If you want to be able to create the same things over and over again, Visual Studio supports templates. It even has macros that expand when a file is created from the template. For instance, $username$ would be replaced by the name of the logged-in user when a file based on the template was created.

    If that's not what you're trying to do, then you might be able to use Visual Studio extensibility to control the creation of projects.

  • Take a look at the source code of this CodePlex project: STSDev

    It ends up being a WinForm app that dynamically produces a Visual Studio Solution with a Project.

    It is geared for Sharepoint, but the idea it uses is what you are looking for.

    Keith

  • Visual Studio project templates can be created which allow child projects to be created within them. This won't be truly dynamic, as you define which child project templates to run during project creation.

    To be truly dynamic you need to do either of the following:

    • Programmatically create the csproj/vbproj file along with all the items you want in there. This would include setting references, etc. in the XML.

    Or:

    • Create a project template and hook into the VS project creation mechanisms to call the create-project method. This can be very tricky: I found zero documentation on it and learned how to do it by reverse engineering the MS ASP.NET MVC project creation wizard DLL (as that will dynamically create a Unit Test project).
  • Hi,

    Have a look at Codus; the source code for version 1.3.2 is downloadable.

    It creates business objects from the DB by opening Visual Studio and adding in the files, and you can see it happen (it creates the project and the files).

    HTH

    bones

Change an image to text via CSS for printing?

Let's say I have a header banner on a webpage I'm about to print. Instead of wasting someone's ink printing the entire block of the image, is there a way via CSS to replace the image with text at H1 size?

From stackoverflow
  • You could put an h1 element and an image in the same place in the source, and have the image CSS display:none for print media, and have the h1 set to display:none for screen media.
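
    A minimal sketch of that idea (the class names and file name are made up):

    <img src="banner.png" alt="Site Name" class="screen-only" />
    <h1 class="print-only">Site Name</h1>

    <style type="text/css">
        .print-only  { display: none; }

        @media print {
            .screen-only { display: none; }
            .print-only  { display: block; }
        }
    </style>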

  • Bryan, typically on things like logos I use image replacement for the graphic anyway, so the logo itself is really in an H1 tag. Then, in my print style sheet, I do something like this...

    h1#logo a, h1#home-logo{
        text-indent: 0 !important;
        background-image: none !important;
        font-size: 1.2em !important;
        display: block !important;
        height: 1em !important;
        width: 100% !important;
        text-decoration: none !important;
        color: black !important;
    }
    

    Which removes the image replacement and shows the text. Make sure of course that you call this stylesheet separately using media="print".

  • I usually just add the following to my style sheet:

    .nodisplay
    {
        display: none;
    }
    
    @media print {
        * {
         background-color: white !important;
         background-image: none !important;
        }
        .noprint
        {
         display: none;
        }
    }
    

    And then assign the noprint class to elements which shouldn't be printed:

    <div class="noprint">
    
    </div>
    

    And for your example, something like the following should work:

    <img src="logo.png" class="noprint" ...>
    <h1 class="nodisplay">Text Logo</h1>
    
  • Adding to Adam's solution: If your text is fixed ("head banner was there" not "ad for such and such was there"), you can use :before or :after pseudo-elements to insert text instead of having the text pre-inserted in the HTML.

    It makes your HTML lighter if you are replacing many images with the same text.

    I have to say that I dislike this CSS feature, but it is there if you want to use it.

  • According to the CSS spec, this should display the alt attribute after the image. Then you would just have to hide the image, but I haven't managed to get it to work right in FF3 or Chrome.

    img:after{content: attr(alt);}
    

Serialize ASP.NET Control collection

I've been tasked with converting an existing ASP.NET site from using InProc session management to using the ASP.NET State Server.

Of course what this means is that anything stored in the Session must be serializable.

One of the most complicated pages in the app is currently storing an ASP.NET control collection to the Session. This is failing miserably because the controls cannot be serialized automatically.

Short of totally rewriting how the page works to prevent the need for storing the control collection in the Session, does anyone have a trick/solution for making the collection serializable?

From stackoverflow
  • The first answer that comes to mind is to do a partial rewrite (I don't think there's going to be an easy answer to this). If it's a small number of control types, write your own controls that inherit from those controls and also implement ISerializable. Then, using search and replace, replace the page's controls with your versions. If you are using a large number of control types, you might spend more time extending the standard types than you would refactoring the page.

    The work is going to be in the serialization and deserialization of the controls when you initialize them, to make sure you're capturing what you need (the TextBox values, the IsSelected, etc.).

    This is obviously a hack, but if your priority really is not rewriting the functionality of that particular page, this might work for you. Then, of course, you need to add this solution to the "technical debt" that your application is accruing, to make sure it's always on someone's radar to refactor at some point.
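
    A very rough sketch of the idea, assuming the page mostly needs the entered text back (the type name and the choice of what to capture are invented, and deciding which state is worth restoring is the real work):

    using System;
    using System.Runtime.Serialization;
    using System.Web.UI.WebControls;

    [Serializable]
    public class SerializableTextBox : TextBox, ISerializable
    {
        public SerializableTextBox() { }

        // Called by the formatter when deserializing: restore the captured state.
        protected SerializableTextBox(SerializationInfo info, StreamingContext context)
        {
            ID = info.GetString("ID");
            Text = info.GetString("Text");
        }

        // Called when serializing: capture only what is needed to rebuild the control.
        public void GetObjectData(SerializationInfo info, StreamingContext context)
        {
            info.AddValue("ID", ID);
            info.AddValue("Text", Text);
        }
    }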

  • Don't store control collections in session state. Tess has a lot of articles about this, for example this one.

  • Rewrite the page. You'll thank yourself later. There are sure to be other problems if the original "programmer" (and I use that term loosely here) thought it was a good idea to store a control hierarchy in session.

    Tim Cavanaugh : Thanks for the suggestion, I came to this realization as well and spent a large part of last week redo-ing the page so it doesn't need to store the controls in the session at all. The re-write seems to be working well.
    Robert C. Barth : Glad to hear it. Sometimes you just have to bite the bullet and hack through the overgrown underbrush with a giant machete until things look right. :-)

Generating PDF with Quick Reports behind a Delphi Web Server

I have a Delphi web server providing some web services*. One of them is supposed to generate and return a PDF report.

The PDF creation is done with a QReport that is then exported into a PDF file with the ExportToFilter procedure.

The routine works fine when called from within an application, but when called behind a TIdTCPServer it hangs and never finishes. Debugging it, I got to the hanging point:

(note: I'm home right now and I don't have the source code. I'll try to reproduce quickrpt.pas' source as accurately as I can remember).

procedure TCustomReport.ExportToFilter(Filter: TQRDocumentFilter);
  ...
  AProgress := TQRFormProgress.Create(Application); // Hangs on this line
  AProgress.Owner := QReport;
  if ShowProgress then AProgress.Show;
  QReport.Client := AProgress;
  ...

Searching the web, I found in this page (1) the suggestion to set ShowProgress to False, and edit the code so that it does not create the progress form when ShowProgress is set to false (apparently, this is due to QReport not being threadsafe).

So, I edited the code, and now I have this:

procedure TCustomReport.ExportToFilter(Filter: TQRDocumentFilter);
  ...
  if ShowProgress then
  begin
    AProgress := TQRFormProgress.Create(Application);
    AProgress.Owner := QReport;
    AProgress.Show;
    QReport.Client := AProgress
  end;
  ...

Now the report comes out. But then the service hits an Invalid Pointer Exception (which I can't trace). Following calls to the service complete successfully, but when I shut down the service** it starts whining again with Invalid Pointer Exceptions, then the "MyServer has committed an invalid action and must be closed" Windows message, then again a couple of times more, then just the pointer exception, and finally it comes to error 216 (which, as far as I could find out, is related to Windows access permissions).

Thanks!

Update (jan 5): Thanks Scott W. for your answer. Indeed, after some research, I found another suggestion that only the main thread can access some components. So I set the QR code back to normal and called the main method from a Synchronize call inside a TThread (so that way the main thread would handle it). But I still get the same error.

You mention you were able to generate PDF as a service with QR 4. Maybe that's why it's not working for me, since I'm using QR 3. On the other hand, you don't mention if you're doing that behind a TIdTCPServer (which is my case, providing web services) or if you run it by itself (for instance, during a batch process).

Anybody knows whether my QR version might be the problem? Thanks!

* Running Delphi 7 and QuickReport 3 on a Windows XP SP2. The server is based on Indy.

** I have two versions of the server: a Windows application and a Windows Service. Both call the same inner logic, and the problem occurs with both versions.

Update (mar 8): After all, my problem was that my printing routine was in another DLL, and the default memory management module is somewhat crappy. Making ShareMem the first unit in my .dpr's uses clause overrides the memory manager with Borland's shared implementation, and that solved my problem.

uses
    ShareMem, ...

(1): http://coding.derkeiler.com/Archive/Delphi/borland.public.delphi.thirdpartytools.general/2006-09/msg00013.html

From stackoverflow
  • I'm guessing that QReport.Client is used somewhere later in the code, and with your modified code no longer assigning it to AProgress, you end up with an error.

    Are you sure that you have to modify the QuickReport source? I have used QuickReport in a Windows Service to generate a PDF file and then attach it to an email message, and it all worked fine without having to modify the QR source. I don't recall exactly which settings had to be made, but it was done with Delphi 6 and QR 4.06.

    Pablo : Do you recall if you generate those PDF behind a TIdTCPServer (as in web services or a web server)? Thanks!
    Scott W : No, we don't. The only network component involved in the service is the SMTP connection component.
  • Did you resolve this? I have exactly the same problem with QuickReport and TIdTCPServer, except that I don't have the QR source code.

    Pablo : Yes, I finally solved it, though it came from a completely different approach. My problem was that my printing code was in another DLL, and the default memory manager is somewhat crappy. Setting the first unit in your dpr's uses clause to ShareMem solved my problem.

How can I get CSS comments with javascript?

I am wondering how I can read CSS comments out of a linked stylesheet.

I have this sample CSS loaded via:

<link rel="stylesheet" type="text/css" media="all" href="test.css" />

 

#test1{ border:1px solid #000; }
#test2{ border:1px solid #000; }
#test3{/* sample comment text I'm trying to read */}

I'm testing this in FF3. The following javascript reads the rules but doesn't read the comments in #test3.

window.onload = function(){
    s=document.styleSheets;
    for(i=0;i < s[0].cssRules.length;i++){
     alert(s[0].cssRules[i].cssText);
    }
}
From stackoverflow
  • Comments will almost always be ignored by an interpreter and therefore will not be available.

  • You can't, that's the entire point of comments.

  • You can't read the CSS file with JavaScript, just inspect the results in the DOM. One possible way might be to use an embedded stylesheet, where you can query the textual content of the style tag via the DOM interface. You have to parse the content yourself, of course.

  • You can access the CSS file using an AJAX query and then parse the results yourself looking for comments. The interpreter won't get in the way then.

    As long as the CSS is on the same domain as the page, this will work nicely.

  • You could retrieve the stylesheet's contents and use regex to parse the comments. This example uses jQuery to get the stylesheet text and a regular expression to find the comments:

    jQuery.get("test.css", null, function(data) {
     var comments = data.match(/\/\*.*\*\//g);
     for each (var c in comments) 
      alert(c);
    });
    

    You could also find the stylesheet links using selectors.

    Allain Lalonde : Does foreach work with the space there?
    Cristian Libardo : @allain: I think so. At least it's in the mozzilla docs https://developer.mozilla.org/en/Core_JavaScript_1.5_Reference/Statements/for_each...in
    Cristian Libardo : Oh, it's in new to javascript 1.6 and should probably be avoided for now

Programmatically tell difference between git-svn and git repos?

I've got some shell stuff set up that dynamically defines aliases depending on what sort of VC repo the CWD contains -- so, for example, 'd' runs 'svn diff' or 'git diff', depending. (Based on this blog post, if anybody is interested...)

I'd like to define some aliases differently depending on whether I'm in a git repo versus a git-svn repo. Is there an easy way to tell the difference between the two?

From stackoverflow
  • You can probably use the output of the git config command to differentiate between a git and git-svn repo.

    git config --get svn-remote.svn.url
    

    should return the svn URL being synced to, if there is one.

  • There is also a branch called trunk and a branch called trunk@[REV]. However, I think codelogic's approach is easier and cleaner.

  • You should be a bit careful when deciding which repositories exactly are git-svn repositories. A repository may contain more than one svn repository.

    Kafka's solution will only work if the svn repository was cloned with the -s or --std-layout option, in which case there actually is a branch called trunk.

    Codelogic's answer will only work if there is an svn remote called svn -- there's no requirement that that is true.

    The easiest way to check if there is an svn-remote in the config is:

    $ git config --get-regexp ^svn-remote
    

    That will find any configured git-svn repository, whatever they're called. It'll exit with status 0 if there is a match, and 1 if there is no match.

    But this doesn't mean that the svn repository is really used. It might also be that someone has imported an svn repository but uses it as a submodule or as a sub-tree merge, or even not at all. If metadata in the git-svn repository has been turned on, you can see if any svn revision has been used in the current HEAD by using something like this:

    $ git rev-list -1 --grep='git-svn-id' HEAD
    

    But that's perhaps a bit too convoluted. You decide.

    genehack : thanks - the rev-list bit is too convoluted for my purposes; after poking around a bit more on my own, i'll probably just do 'if [ -d .git/svn ]' and go from there. i realize it won't handle the general case, but it covers all of mine. thanks again.

Generating the partitions of a number

I needed an algorithm to generate all possible partitions of a positive number, and I came up with one (posted as an answer), but it's exponential time.

The algorithm should return all the possible ways a number can be expressed as the sum of positive numbers less than or equal to itself. So for example for the number 5, the result would be:

  • 5
  • 4+1
  • 3+2
  • 3+1+1
  • 2+2+1
  • 2+1+1+1
  • 1+1+1+1+1

So my question is: is there a more efficient algorithm for this?

EDIT: Question was titled "Sum decomposition of a number", since I didn't really know what this was called. ShreevatsaR pointed out that they were called "partitions," so I edited the question title accordingly.

From stackoverflow
  • Here's my solution (exponential time) in Python:

    q = { 1: [[1]] }
    
    def decompose(n):
        try:
            return q[n]
        except KeyError:
            pass
    
        result = [[n]]
    
        for i in range(1, n):
            a = n-i
            R = decompose(i)
            for r in R:
                if r[0] <= a:
                    result.append([a] + r)
    
        q[n] = result
        return result
    

     

    >>> decompose(5)
    [[5], [4, 1], [3, 2], [3, 1, 1], [2, 2, 1], [2, 1, 1, 1], [1, 1, 1, 1, 1]]
    
  • It's called Partitions. [Also see Wikipedia: Partition (number theory).]

    The number of partitions p(n) grows exponentially, so anything you do to generate all partitions will necessarily have to take exponential time.

    That said, you can do better than what your code does. See this, or its updated version in Python Algorithms and Data Structures by David Eppstein.

    Can Berk Güder : Oh, thanks. Wish I knew what it was called before. =) It's funny they don't teach this in Number Theory.
    Can Berk Güder : And I should probably edit the question title accordingly.
    : Thanks for the link to David Eppstein's site, just finished an interesting browsing on his site.
  • When you ask for a more efficient algorithm, I don't know which to compare against. But here is one algorithm written in a straightforward way (Erlang):

    -module(partitions).
    
    -export([partitions/1]).
    
    partitions(N) -> partitions(N, N).
    
    partitions(N, Max) when N > 0 ->
        [[X | P]
         || X <- lists:seq(min(N, Max), 1, -1),
            P <- partitions(N - X, X)];
    partitions(0, _) -> [[]];
    partitions(_, _) -> [].
    

    It is exponential in time (same as Can Berk Güder's solution in Python) and linear in stack space. But using the same trick, memoization, you can achieve a big improvement by trading some memory for a lower exponent. (It's ten times faster for N=50.)

    mp(N) ->
        lists:foreach(fun (X) -> put(X, undefined) end,
           lists:seq(1, N)), % clean up process dictionary for sure
        mp(N, N).
    
    mp(N, Max) when N > 0 ->
        case get(N) of
          undefined -> R = mp(N, 1, Max, []), put(N, R), R;
          [[Max | _] | _] = L -> L;
          [[X | _] | _] = L ->
              R = mp(N, X + 1, Max, L), put(N, R), R
        end;
    mp(0, _) -> [[]];
    mp(_, _) -> [].
    
    mp(_, X, Max, R) when X > Max -> R;
    mp(N, X, Max, R) ->
        mp(N, X + 1, Max, prepend(X, mp(N - X, X), R)).
    
    prepend(_, [], R) -> R;
    prepend(X, [H | T], R) -> prepend(X, T, [[X | H] | R]).
    

    Anyway you should benchmark for your language and purposes.

How to draw subsection of text using .net graphics

I'm doing custom drawing in datagridview cells and I have items that can vertically span across multiple cells. An item displays text and the issue at hand is how can I draw just the cell's part of the text? I have the item's rectangle and the cellBounds.

Currently, I am drawing all the text on each cell paint i.e. I'm drawing over cells other than the one I'm currently painting from. This requires me to clear out the previous text (so it doesn't get blurry and bolded)...so I'm actually drawing the string twice per cell paint. Not very efficient.

//get the actual bounds of this  entire item spanning across multiple cells
RectangleF sRectF = GetItemRectF(startX + leftMargin + 2, widthForItem, cellBounds, calItem);

//we clear it out first, otherwise the text looks bolded if we keep drawing a black string over and over
//todo should figure out how to only draw this cells section? cellBounds subsection of sRectF somehow
graphics.DrawString(calItem.Description, new Font("Tahoma", 8), new SolidBrush(itemBackColor), sRectF);
graphics.DrawString(calItem.Description, new Font("Tahoma", 8), new SolidBrush(Color.Black), sRectF);

Could I draw the string on some temp graphics and then snatch out the cell bounds part and draw that on the actual graphics? Is there a better way?

Thanks

Answer

Region tempRegion = graphics.Clip;
graphics.Clip = new Region(cellBounds);
graphics.DrawString(calItem.Description, new Font("Tahoma", 8), new SolidBrush(Color.Black), sRectF);
graphics.Clip = tempRegion;
From stackoverflow
  • I don't think I quite understand the visual effect you intend to have. Is the text for the item supposed to overlap multiple cells or clipped to a single cell? If it's supposed to be clipped to the cell you can set your clipping area using Graphics.Clip to clip to a specified rectangle.

    If the problem is related to smearing due to not clearing the buffer you can use FillRectangle to clear a region cheaper than drawing text.

    dotjoe : Ahh the Clip is what I wanted...thanks.

Learning via books or online tutorials

I see a lot of questions from people asking what books are recommended to learn a certain language or technology but I hardly see anyone asking for online tutorials. Since graduating from college I've only bought one programming book but I've reviewed countless online tutorials and web sites. Do physical books offer anything I can't learn through online tutorials and tech web sites? Thanks.

From stackoverflow
  • Physical books are good for reading when you are away from the computer. Some people like that format as they can use it when taking breaks from computer usage or when doing something where they wouldn't normally have a computer (lunch, bathroom).

    The downside to books is if something is wrong it's up to you to figure it out. On a website it will most likely be pointed out right away if any information is incorrect.

  • what do you want to learn?

    Anyway, from my experience it is both: tutorials give you a good place to start, but a book will cover the issue in depth.

    So start with tutorials and use books and stackoverflow for problems :)

    SquidScareMe : Thanks for the answer. There is nothing in particular I need to learn at the moment. Seeing everyone ask for book recommendations and not online tutorials made me question how I do things.
  • Book authors put in many more hours of work than tutorial authors; whilst it may be possible for an online tutorial to be as polished as a book, in practice a book has undergone several reviews along its drafts and versions (especially the best-selling ones that get recommended, which are also selected from a large supply of books).

    In books, the author typically touches on more fringe themes than in a tutorial, which tends to go straight to the point, more concisely. Thus, the book may give you the same technical answer, but also bundle in more wisdom.

    Books tend to have exercises and are designed more to be studied than read.

    You typically read faster from a book, and the tactile feedback is pleasant to most of us.

  • Websites and tutorials are infinitely helpful if you're looking to learn something specific, and fast, but they can't match the depth of books. Books can afford to be longer, and thus can go into much greater depth than an online tutorial.

    Both have their place. If you want to pick up a new language quickly, a tutorial is going to be much more useful than a book. But for broader topics - say, best practices - no tutorial is going to measure up to a book like Code Complete.

  • The best way I learn, once you have the essential basics, is by reading code. It's a lot easier, at least for me, than reading long articles or a 500-page book.

    David Rodríguez - dribeas : There are many subtleties that you won't get from code. You'll get a solution, but not why other approaches are better/worse than this one. You'll get to know a set of facts without rationales.
  • Nowadays most books can be bought in electronic formats, so IMHO the real difference is the difference between reading online and reading a book.

    When you are reading online, you can use the hyperlinks in the document to quickly navigate to related parts, check related websites, copy-paste code examples.

    Reading a book is more useful for the longer theoretical parts (you can't try code examples in your head, of course), where you can take your book and a cup of tea, sit on your best couch and let the material sink in, or when you want to return to a topic which you couldn't fully understand. Another advantage of using a book is that you have to type in the code examples -- a great way to learn!

  • Tutorials are great for starting points and examples.

    Books are great for more detailed concepts.

    Take the best of both worlds. I go for tutorials when I'm learning something new and need the quick and dirty summary of it (new language, program, API, whatever). Books are great for things like design patterns or more detailed explanations.

  • A trend I've noticed in software design books recently is going through the entire thought process of a particular design, including the mistakes. One example is "Applying Domain-Driven Design and Patterns" by Jimmy Nilsson. This approach is quite nice to talk about pitfalls and blunders that we all make, showing us it's okay to make them, but then pointing out why another approach makes more sense.

    While this is not impossible in tutorials, I don't see a lot of them going out of their way to force you to make mistakes in your design along the way to learning to do it the "right" way.

    The book's a nice format for this type of approach since you a) can see the mistake, and b) don't have to waste time coding it up yourself.

  • Personally I find that they both have their place:

    Online Tutorials

    • Great for getting quick help and cut-n-paste code samples.
    • Usually up to date, or edited with links to updated info.
    • Usually cheap/free.
    • Can be hard to read online.
    • Can be missing information, lacking in research, or just plain wrong.
    • Can be hard to find the good ones amongst the dross.

    Hardcopy Books

    • Great for reading offline, and they engage your brain better than web pages usually do.
    • Usually well researched and edited.
    • Can be expensive.
    • Can be out of date, and you may not realise until you actually try to use it.
    • Are not always accessible.
  • The biggest shortcoming of online tutorials is the lack of standards and planning in the content, so you can't get a website with a proper TOC like a book has.

    But video-based online sites like learnvisualstudio.com are great; you get the best of both worlds: high-quality content, a plan and a consistent style, plus video and interactivity, easy access to code, etc.

  • I enjoy using books as reference material. Tutorials are great for learning a small feature, but having a book you've read and know is something that can be exceptionally valuable. Being able to hand a book to a coworker and say read chapter 6 can be more useful than an email full of tutorial links.

    Having said that I have dozens of links to tutorial sites and only keep 2-4 books with me at work at a given time.

    In summary tutorials are great tools (which can sometimes be very specialized) while a book is more like a toolbox. Neither is exclusive of the other and they tend to work well together.

  • Personally, I think I learn better from books than from tutorials. I can get the quick and dirty details of a language/API from a tutorial, but I need a good book to really understand the topic in detail. I also like the fact that I can go away from my computer, read a book in my best chair, and just enjoy the comfort of home.

  • I don't think they offer anything that the internet doesn't offer save for a different perspective on a topic ... oh and structure.

    Regarding structure, unless you think you can organize your own course material for learning an entire language, it's much easier to rely on someone else's already built table of contents to walk you through it.

    They work out very nicely as reference material as well.

Is there anyway to prevent 'this' from changing, when I wrap a function?

I want all buttons to perform an action before and after their normal onclick event. So I came up with the "brilliant" idea of looping through all those elements and creating a wrapper function.

This appeared to work pretty well when I tested it, but when I integrated it into our app, it fell apart. I traced it down to the fact that the 'this' value was changed by my wrapper. The sample code illustrates this; before you wrap the event handlers, each button displays the button id when you click it, but after wrapping, the displayed name is 'undefined' in this example, or 'Form1' if you run it from within a form.

Does anybody know either a better way to do the same thing? Or a good way to maintain the originally intended 'this' values?

As you can imagine, I don't want to modify any of the existing event handler code in the target buttons.

Thanks in advance.

PS: The target browser is IE6 and up; cross-browser functionality is not required.

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" 
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">

<html xmlns="http://www.w3.org/1999/xhtml">
<script language="javascript" type="text/javascript">
    function btnWrap_onClick()
    {
     var btns = document.getElementsByTagName("button");
     for( var i = 0; i < btns.length; i++)
     {
      var btn = btns[i];

      // handle wrap button differently
      if( "btnWrap" == btn.id)
      {
       btn.disabled = true;
       continue; // skip this button
      }

      // wrap it
      var originalEventHandler = btn.onclick;
      btn.onclick = function()
      {
          alert("Starting event handler");
          originalEventHandler();
          alert("Finished event handler");
      }
     }

     alert("Buttons wrapped successfully");
    }
</script>
<body>
    <p>
    <button id="TestButton1" onclick="alert(this.id);">TestButton1</button>
    <button id="TestButton2" onclick="alert(this.id);">TestButton2</button>
    </p>
    <button id="btnWrap" onclick="btnWrap_onClick();">Wrap Event Handlers</button>
</body>
</html>
From stackoverflow
  • Your problem is the way closures work in JavaScript. Honestly, I'd recommend using a framework. Any of them should make event-handling far nicer than doing it by hand.

  • You can use the call method to resolve the binding, e.g. originalEventHandler.call(btn);

    Alternatively, a library like prototype can help - its bind method lets you build a new function bound to a specified object, so you'd have declared originalEventHandler as var originalEventHandler = btn.onclick.bind(btn);

    Finally, for a good backgrounder on binding issues, see also Getting Out of Binding Situations in JavaScript

    John MacIntyre : +1 Thanks alot for the hints Paul. Just to let you know, I gave the answer to 'some' because he anticipated and answered some barriers I ran into. Thanks again.
  • Like Paul Dixon said, you could use call but I suggest you use apply instead.

    However, the reason I am answering is that I found a disturbing bug: You are actually replacing all your event handlers with the event handler of the last button. I don't think that was what you intended, was it? (Hint: You are replacing the value for originalEventHandler in each iteration)

    In the code below you find a working cross-browser solution:

    function btnWrap_onClick()
    {
        var btns = document.getElementsByTagName("button");
        for( var i = 0; i < btns.length; i++)
        {
            var btn = btns[i];
    
            // handle wrap button differently
            if( "btnWrap" == btn.id)
            {
                btn.disabled = true;
                continue; // skip this button
            }
    
            // wrap it
    
            var newOnClick = function()
            {
                alert("Starting event handler");
                var src=arguments.callee;
                src.original.apply(src.source,arguments);
                alert("Finished event handler");
            }
            newOnClick.original = btn.onclick; // Save original onClick
            newOnClick.source = btn; // Save source for "this"
            btn.onclick = newOnClick; //Assign new handler
        }
    alert("Buttons wrapped successfully");
    }
    

    First I create a new anonymous function and store that in the variable newOnClick. Since a function is an object I can create properties on the function object like any other object. I use this to create the property original that is the original onclick-handler, and source that is the source element that will be the this when the original handler is called.

    Inside the anonymous function I need a reference to the function itself to be able to get the values of the properties original and source. Since the anonymous function doesn't have a name, I use arguments.callee (supported since MSIE 5.5) to get that reference and store it in the variable src.

    Then I use the method apply to execute the original onclick handler. apply takes two parameters: the first is going to be the value of this, and the second is an array of arguments. this has to be the element that the original onclick handler was attached to, and that value was saved in source. arguments is an internal property of all functions and holds all the arguments the function was called with (notice that the anonymous function doesn't have any parameters specified, but if it is called with some parameters anyway, they will be found in the arguments property).

    The reason I use apply is that I can forward all the arguments that the anonymous function was called with, and this makes this function transparent and cross-browser. (Microsoft put the event in window.event but the other browsers supplies it in the first parameter of the handler call)

    some : Now when I look at this again I realize that it should be refactored to use the same anonymous function, and store the value of the old caller on the original object. That would use less memory.
    John MacIntyre : Thanks some, I did notice that bug, and was totally perplexed as to how I was going to reference the real original event hanlder, and calling control. You Rock!
    some : @John MacIntyre: Thank you John! (I'm just curious if thats your real name or if is a reference to Trapper? Maybe both?) Btw, if you are going to use this in production you should consider to refactor it like I said in my comment above.
    John MacIntyre : Real name. I did use it, and it will be rolled into a < 100 user production environment late next week. But I didn't follow what you were talking about with the refactoring that you mentioned. Could you append it to your answer? thx.
    some : @John: Happy New Year! I'm on vacation for a week and don't have my tools with me. The solution above works; it is clean, cross-browser, and doesn't interfere with anything else (it even works if you accidentally call it twice). However, it isn't the most memory efficient since it creates one (cont..)
    some : anonymous function for every event handler. I would consider to refactor it to use only one function, and that the necessary data was stored at the element object instead.
    some : Actually, the more I think about it, the solution I have in my answer is a generic one that works for every type of event. As long as you keep the code in the function short it is not a big waste of memory.
    John MacIntyre : Thanks, Some. Happy New Year to you as well. I understand what you are saying. My function is exactly the same as what you have above, with the 2 'alert()'s replaced with my own function calls. Putting it in the button does make sense, but I'm pretty pleased with what we've got now.
    John MacIntyre : May I ask you why don't you use your real name? You're obviously a pretty bright guy, googling you and finding an answer like this would be a pretty good thing I would think. Or are you some famous programmer who doesn't want to be hassled. ;-)
    some : @John: Thank you! Sorry to disappoint you: I'm not a famous programmer. I usually don't use my real name on the net unless it is for official work. Sometimes it is good to be searchable and sometimes it's not and you usually don't know what will bite you until it is too late. I try to prevent (cont)
    some : bad things from happening by using different names on different sites. It's an old habit and has worked so far. :)

How do I convince someone they need to upsize from ms access to sql server or similar

I am having a real problem at work with a highly ingrained developer obsessed with MS Access. Users moan about random crashes, locking errors, freezes, and the application slowing down (especially in 2007), but there is a lot of resistance to moving it. Most of the time they blame the computer and can't be convinced that the problem is the fact that it's an .mdb sat on a network drive, and nothing to do with the brand-new hardware sitting in front of them.

There is a front-end VB program hanging off it, but I don't think it would take more than a couple of weeks to adjust; in fact I would probably re-write it, as it has years of accumulated messy code from a previous developer.

What are my best arguments to convince them we need to move it?

Does anyone else have similar problems with developers stuck in their ways?

From stackoverflow
  • Try benchmarking and showing the stats to him.

    David-W-Fenton : Benchmarking Jet vs. SQL Server is likely to be a loss for SQL Server, simply because there's a helluva lot of overhead involved in providing SQL Server with the functionality that it has that Jet does not include. It's quite common for an upsized Access/Jet app to run more slowly when upsized.
  • Here is an in-depth list of reasons why you might consider moving, straight from Microsoft:

    http://msdn.microsoft.com/en-us/library/aa902657(SQL.80).aspx

  • I once had similar problems with someone I would not hesitate to call a complete idiot.

    It was not possible to convince them of the issues with Access. In the end it was easier to force the issue than do it "nicely"; cruel to be kind.

  • Making people change can sometimes be a real pain in the butt.

    I would have to say the main arguments would be stability and speed, but of course, as you have said, they already know this and still won't move.
    Another thing to try would be to show them the power of LINQ to SQL and how much cleaner it would make your application. Like Daniel Silveira said, you could try throwing a couple of stats their way and see if they are convinced.

    We have an app built using MS Access as a back end, and I can't wait till we get our new SQL Server so I can move everything over to it.

  • You could show him the perf results comparing the two, but if he's really set in his ways and refuses to change, there isn't much you can do except force him somehow.

    If you're his boss then just force him to change it to use SQL. If not, then convince your boss to force the change by showing him the perf results and explain it'll fix the issues you're having.

  • If they resist then you can always go above their head. Management must be aware of crashes and stability related issues. Present a plan to them to improve stability and they are likely to at least listen. They will probably then want a meeting with all developers to discuss so go into it armed with plenty of ammo.

  • How about the random crashes, locking errors, freezes, and slowdowns?

    A quick search on the web finds some useful materials.

    It's hard to convince people who are not willing to learn and are not open to new ideas. You can go on about speed issues, concurrency issues, security problems... but ultimately, some people will just never listen. Go over their heads. Rewrite it in tools from this decade and show them up. Refuse to be involved with the project any further. I don't know what the political situation is, but technically, MS Access is wrong for what you are doing, from what you've described.

    PeteT : There is not really one answer to this so I gave the answer to the highest rated.
  • Come in on a weekend, copy the database to SQL Server, change the app's connection strings to SQL Server, retest the application, then uninstall MS Access... everywhere.

    Then don't say anything about it; let him think that the problems 'fixed themselves' and that the users are still using MS Access.

  • To me it depends on how many concurrent users you have and how big the database is. If you have more than 5 concurrent users then you should be thinking about a database server. The network traffic starts to get out of hand and with each concurrent user you add it just gets worse.

    I have created reliable Access-based systems for years. If you are having random crashes, locking issues, and slowdowns then you are doing something wrong. I typically have a local .mda with the .mdb on the network when creating an app in Access. To get good performance it's key to have the proper indexes and to optimize queries to fetch just the data you need. Whether you're using a separate app, Access, or some app running against SQL Server, you need to actively handle record locking properly. You can't just blindly let Access lock your records.

    Remou : I think that even 5 is quite a small number. I, too, have a number of Access databases still in use and seemingly reliable. Access can be very useful for small companies. People to work with SQL can be expensive.
    bruceatk : I'm saying 5 concurrent users. I have had access apps used by over 25 people, but their activity wasn't concurrent.
    PeteT : I have always heard about 5 users too and at peak times we have about 10 however we have some working actually in the db and some working on a front end. I think this one of the issues.
    Remou : The number of concurrent users allowed by Access is 255, however, for the most part, another database should be considered if you need that number of users. Access is better than most people think it is, but it is often very badly set up.
    bruceatk : The 255 is a hard limit not a realistic one. If you really had 255 concurrent users you would all be seriously bottlenecked on the network. 255 database engines all accessing the data on a shared network drive is not a recipe for success. Compare the numbers and 5 is more realistic.
    JohnFx : I have to disagree on the second assertion "how big the database is" this is not a great reason to consider a client server DB over access unless we are talking truly massive systems (in which case SQL might not be great either).
    JohnFx : On the # of users debate, I have managed systems with over 60 active concurrent users that performed well. It just requires a very well designed system. Access gets a bad rap (like VB) because it makes it "too easy" to build a basic working app for someone with poor design skills.
  • Errr, leave the team? You seem to be working with the totally wrong set of people. Now, if the team IS your company, then you are working with the wrong company.

    Of course once you leave the company, you could tell your clients that you could solve the network problems on their own and make them leave the company as well. Then give them an improved system that works on SQL Server Express.

  • The best possible advice I can give you is to make sure that you have a good attitude and are known as someone who does quality work and gets things done. It sounds like you don't have any control in the situation so what you need is influence.

    Find a way to solve a problem (probably a different one that is less threatening to the people involved) in the way you are suggesting. Make it work blindingly fast and flawlessly. Make it work so well that people start asking for you when they need something done. Get it done quickly, which you should be able to do because you'll be using the right tools for the job.

    Be a good person to work with, not the PITA that knows how everyone else should write their code. Be able to give an answer for what you might do differently and why, but don't automatically assume that your ideas are always the best. Maybe there are trade-offs that you don't know about -- no money in the budget for the extra CALs, we have this other app that needs to be done first. This doesn't sound like your situation, but looking for opportunities to understand before making constructive criticisms can go a long way to helping people be receptive.

    The other thing is that this probably has nothing to do with the technical aspects of the situation and everything to do with the insecurities of the other developer. "This is all I know. If we change it, I won't understand it and then where will I be." Look for ways to help the other guy grow -- when he's having a problem, find resources that will help him develop good technical solutions. Suggest that everyone in your department get some training in new technologies. Who knows, one good SQL Server course and the guy could become the SQL Server evangelist in the organization because now THAT'S what he knows.

    Lastly, know when to cut your losses, so to speak. If you find that you're not able to do anything about the situation, don't add to the complaining. Move on to something that you can control and do it as well as you can. Maybe in the future you'll be in a position that you do have control or influence in the situation and can do something about it. If you find that you're in a company that's more dysfunctional than most, find a way to move on to a place where the environment is better.

  • More than "How to convince them", let's talk about "How to do it without anybody noticing"!

    First of all, I advise you not to mix the code optimisation issue with the SQL Server one. Do not give users a chance to complain about SQL Server while the bugs are actually related to something else.

    If your code is really unbearable, rewrite the app before switching to SQL Server, keeping in mind the following points to make the final transition to SQL Server completely transparent for the end users.

    This is what we did 18 months ago, and I am sure we still have users thinking our database is Access:

    1. Export the current Access database to SQL Server through the wizard available in Access, for testing purposes (many problems might occur, and you could need another tool such as the one proposed here).
    2. Create a single connection object at the application level, so that you can freely switch from Access to SQL at any time (at development level, you can even add an input box at startup to ask which connection to use). We chose an ADODB connection object, but it will also work with an ODBC connection (see the sketch after this list).
    3. If you use SQL syntax to update tables, make sure that all SELECTs, INSERTs, UPDATEs and DELETEs go through this connection. If you use recordsets, make sure that all of them use this connection when they are opened.
    4. Where needed, update all connection-specific code by wrapping it in a "SELECT CASE type_Of_TheConnexion" construct so each back-end gets its own branch.
    5. Switch to the SQL connection... and debug till you're done!
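
    Here is a minimal sketch of step 2, assuming an ADODB connection handed out by one function (the USE_SQL_SERVER flag, server name and file path are placeholders to adapt, and it needs a reference to the Microsoft ActiveX Data Objects library):

        ' One module that hands out the application's single connection.
        Public Const USE_SQL_SERVER As Boolean = False   ' flip when ready to switch

        Private m_cn As ADODB.Connection

        Public Function AppConnection() As ADODB.Connection
            If m_cn Is Nothing Then
                Set m_cn = New ADODB.Connection
                If USE_SQL_SERVER Then
                    m_cn.Open "Provider=SQLOLEDB;Data Source=MYSERVER;" & _
                              "Initial Catalog=MyDb;Integrated Security=SSPI;"
                Else
                    m_cn.Open "Provider=Microsoft.Jet.OLEDB.4.0;" & _
                              "Data Source=\\server\share\backend.mdb;"
                End If
            End If
            Set AppConnection = m_cn
        End Function

        ' Every SELECT/INSERT/UPDATE/DELETE and every recordset then goes through it:
        '   Dim rs As New ADODB.Recordset
        '   rs.Open "SELECT * FROM Customers WHERE ID = 1", AppConnection()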

    The problems you will find are mainly linked to SQL syntax: MSSQL uses ' where Access accepts ", and Access uses # as date delimiters. Date format is also an issue: the standard SQL Server format is 'YYYYMMDD', while the MS Access format depends on the computer's locale (beware of conversions from date to string!) and is written as "YYYY-MM-DD" (if I remember correctly...). Booleans in SQL Server are 0 and 1, while they are True/False or 0/-1 in Access...
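
    To keep those differences in one place, you could wrap the literal formatting in a couple of helpers (hypothetical names, reusing the USE_SQL_SERVER flag from the sketch above):

        ' Build a date literal for whichever back-end is active.
        Public Function SqlDateLiteral(ByVal d As Date) As String
            If USE_SQL_SERVER Then
                SqlDateLiteral = "'" & Format$(d, "yyyymmdd") & "'"      ' unambiguous for SQL Server
            Else
                SqlDateLiteral = "#" & Format$(d, "yyyy\-mm\-dd") & "#"  ' Jet/Access date delimiters
            End If
        End Function

        ' Build a boolean literal: bit 1/0 on SQL Server, True/False on Access.
        Public Function SqlBoolLiteral(ByVal b As Boolean) As String
            If USE_SQL_SERVER Then
                SqlBoolLiteral = IIf(b, "1", "0")
            Else
                SqlBoolLiteral = IIf(b, "True", "False")
            End If
        End Function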

    Test, update code, and when you are ok, make a new data transfer, lock your app on the SQL connection, and distribute a new runtime.

    Forget the arguments about DB size; in 90% of the cases where I hear it brought up, it is an uninformed reason to shift to a client-server platform.

    Your best arguments are based on features explained at a low-tech level: (1) You can back up and perform maintenance on the DB without kicking out the users (which would introduce costly downtime).

    (2) Faster recovery if data is accidentally deleted/mangled or corrupted. Again, less risk and less downtime. This is always a good foundation for a business case.

    (3) If (and only if) you anticipate the need to scale quite a bit, the upgrade will better allow that.

    (4) If you need to run automated jobs/updates, SQL can do this much more elegantly.

    Remember the contraindications for SQL Server: it is easy to get on your technical high horse about this platform versus that, but you have to balance the benefits against the costs. SQL Server is a helluva lot more expensive to maintain, as it requires dedicated hardware, expensive licenses (server OS and DB), and usually at least a part-time DBA who is going to cost you a bare minimum of $75K (if you get lucky AND work out of Podunk, Iowa).

    It depends on the type of application and the data load of your database, but Access is quite efficient, even over the network.
    Depending on the amount of data your users deal with, you could easily scale up to 100 users on a network just using a front-end and back-end Access database.

    Looks like in your case a rewrite may be in order. If your application is data-centric, it doesn't make much sense to develop it in VB6: the tools given by Access are much better than anything you'd be able to build yourself, especially when considering Access 2007.

    Upsizing to SQL Server is only really required if you're getting into issues of:

    • Security:
      you need to make sure that only the right users can access the data. You can do your own security in Access, but it's never going to be as strong as SQL Server's.
    • Scalability:
      you're dealing with lots of data and complex queries, or a lot of users, and it would be better to have dedicated hardware to handle the load for the clients. The issue with this, though, is that while you remove the pressure from the less capable client machines, you add a lot more to the server.
    • Integrity:
      With the back-end database being just a file that needs R/W access for all connected clients, there's always the possibility that someone is going to do something bad or that a client may crash and leave the database corrupted.

    If your number of users is average (I'd say 30), then there's probably no real need to upscale:

    • Use MS Access 2007 to develop your application, then just use the MS Access 2007 Runtime (it's free!) on all client machines to get a more modern user interface (uses the Ribbon and has lots of UI enhancements over previous versions).
      You can't beat the cheapness of that solution: you only need one full retail version of MS Access and all the rest is free, regardless of the number of users!
    • Don't think that moving to SQL Server is going to improve performance of your queries: MS Access often does a better job of optimizing the queries for you (it knows what needs to be displayed and does lots of caching and optimization).
    • Make sure you only edit small amounts of data at any one time (don't use dynaset queries just to display vast amounts of data in a datasheet; use a snapshot instead and open a detail form that only contains the data to edit when necessary).
    • Cache complex queries locally.
      Build some caching mechanism that leaves a copy of the results of a complex query on the local machine. The gain in performance is pretty amazing, and if the query doesn't change much (for instance a log of stock operations) you can just persist the complex/big query locally and append new records as necessary (see the sketch after this list).
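
    A rough sketch of one such caching mechanism (the table, query and field names are invented): run the expensive query once into a local table, then on later refreshes append only the rows added since the last run.

        ' Returns True if a table exists in the front-end.
        Private Function TableExists(db As DAO.Database, ByVal tblName As String) As Boolean
            Dim tdf As DAO.TableDef
            For Each tdf In db.TableDefs
                If StrComp(tdf.Name, tblName, vbTextCompare) = 0 Then
                    TableExists = True
                    Exit Function
                End If
            Next tdf
        End Function

        Public Sub RefreshStockLogCache()
            Dim db As DAO.Database
            Dim lastID As Variant

            Set db = CurrentDb()

            If Not TableExists(db, "tblStockLogCache") Then
                ' First run: a make-table query copies the full result set locally.
                db.Execute "SELECT * INTO tblStockLogCache FROM qryStockLogComplex", dbFailOnError
            Else
                ' Later runs: append only the records not cached yet.
                lastID = Nz(DMax("LogID", "tblStockLogCache"), 0)
                db.Execute "INSERT INTO tblStockLogCache " & _
                           "SELECT * FROM qryStockLogComplex WHERE LogID > " & lastID, dbFailOnError
            End If
        End Sub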

    There is so much more to say.

    Bottom line is: you may be looking at a rewrite, but don't dismiss Access as the solution because your current application was poorly written.

  • It is possible, and actually fairly easy, to convert an Access database to having the tables/views in SQL Server while still using the Access app as a front-end.

    From there, your Access-obsessed developer can still have fun with all that VBA code. Meanwhile, on the back-end, you add indexes and such to speed everything up. Maybe someday you get lucky, and he asks about stored procedures. Then, the app is just a front-end, and who cares what it's written in? Your data is safe in SQL Server.
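
    A rough sketch of that setup with DAO linked tables (server, database and table names are placeholders, and it assumes the original local table has already been renamed or removed once its data was moved up):

        Public Sub LinkSqlServerTable(ByVal localName As String, ByVal remoteName As String)
            Const CONNECT As String = _
                "ODBC;DRIVER={SQL Server};SERVER=MYSERVER;DATABASE=MyDb;Trusted_Connection=Yes;"

            Dim db As DAO.Database
            Dim tdf As DAO.TableDef

            Set db = CurrentDb()

            ' Drop any stale link with that name, then create a fresh one.
            On Error Resume Next
            db.TableDefs.Delete localName
            On Error GoTo 0

            Set tdf = db.CreateTableDef(localName)
            tdf.Connect = CONNECT
            tdf.SourceTableName = remoteName
            db.TableDefs.Append tdf
        End Sub

        ' Usage, e.g. from the Immediate window:
        '   LinkSqlServerTable "Customers", "dbo.Customers"

    The forms, reports and VBA code keep talking to the same table names; only the links underneath change.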

    It is possible for you to do this yourself, but just leave the production app ALOOOOOOOOOOOONE. Take a copy, and convert that copy. Then, host it for a couple of users to TEST drive .. make your version of the Access app show "TEST APP" in big red letters. If your developer asks what you're doing, you can say the truth -- you are testing to see if converting only the tables/views might be of some help to the overall app.

    This way, you get the best of both worlds, keep your developer happy, make the users happier (hopefully), and if you play it right, your bosses will know that you handled a knotty personnel issue with your technological prowess and your maturity.