Tuesday, March 1, 2011

Redirect to Error page when user clicks the Browser Refresh button

I need to detect when the user clicks the browser's Refresh button and redirect them to an error page. Can this be done in JavaScript, or with some server-side method in ASP.NET?

From stackoverflow
  • If you give each link you present a unique ID (e.g. a GUID) in the URL as a parameter, then you can keep track of all the requests you've processed. (You could clear out "old" requests, if you don't mind the mechanism failing when someone leaves a browser open for a few days and then hits refresh.) The first time you see a GUID, write it into the table. If you see it again, redirect to an error page.

    It's pretty ugly though, and users could just edit the URL to change the GUID slightly. (You could fix this last flaw by recording the GUID when you generate it, and update the table to indicate when it's been used.)

    In general, users expect to be able to refresh the page though - particularly for GET requests (even though most users wouldn't know what that means). Why do you want to do this?

  • You can use a client-side hidden variable to store a counter, or you can put the counter in the session. I would suggest expiring the page on refresh; there are ways you can achieve this, such as disabling the cache (like all the banks' websites do).

  • You can do that, but I'm sure you shouldn't. The user is in control of the browser, and if she feels like refreshing, it's your job to make sure the page refreshes. Returning an error page is the wrong answer.

  • Well, you can use a well-known technique called "Synchronizing Token" or something like that =D, mostly used when submitting forms.

    This will work like this:

    1. Create a function to provide a pseudo-random string token.

    2. For every request to your page, check whether a variable is present in the Session, e.g. Session["synctoken"]. If not, then it is the first time: generate a token and store it there.

    3. On every link request, e.g. "mypage.aspx", put a GET parameter called synctoken with another token, different from the one you have stored in the Session; it goes like "mypage.aspx?synctoken=2iO02-3S23d".

    4. Then, coming back to (2): on a request, if a token is present in the Session, check whether the GET parameter is present (Request.QueryString["synctoken"] != null). If not, send an error. If it is, check whether the tokens (Session and GET) are different. If they are different, it is ok: store the GET value in your Session (Session["synctoken"] = Request.QueryString["synctoken"]) and go to step (2). If they are the same, the user refreshed the page; there goes your error.

    It goes like:

    if (Session["synctoken"] != null) {
        if (Request.QueryString["synctoken"] != null) {
            if (Request.QueryString["synctoken"].ToString().Equals(Session["synctoken"].ToString())) {
                // Same token as last time: the page was refreshed. Go to the error page.
                MyUtil.GotoErrorPage();
            }
            else {
                // A new token: it is ok, store it and go on.
                Session["synctoken"] = Request.QueryString["synctoken"];
            }
        }
        else {
            // No token in the query string at all: error.
            MyUtil.GotoErrorPage();
        }
    }
    else {
        // First request in this session: generate and store a token.
        Session["synctoken"] = MyUtil.GenerateToken();
    }
    

    Sorry if I couldn't be clearer... good luck!

    José Leal : Ok, I got a minus but I don't even know why! Sorry for trying to help.

Java Command Line Trouble with Reading a Class from a Jar Archive

I am trying to run a Java-based tool using a command-line syntax like the following: java -cp archive.jar archiveFolder.theMainClassName. Although the class I am looking for, the main class "theMainClassName", is in archive.jar, inside the archiveFolder given at input, I keep getting an error that my class cannot be found. Does anybody have any ideas concerning this problem? Thank you in advance.

From stackoverflow
  • Perhaps with java -jar archive.jar?

    Of course, it supposes the manifest points to the right class...

    You should give the exact message you got, it might shed more light.

    EDIT: See Working with Manifest Files: The Basics for information on setting the application entry point (Main class) in your jar manifest file.

    Jason Coco : This doesn't actually answer the question... he's not attempting to run the jar but a specific class in the jar. It's almost certainly a packaging problem.
    Bill the Lizard : I agree that this doesn't answer the specific question asked, but other people searching SO might be suffering from a problem with their manifest. They'll run across this question, so I think it's worth having this answer here.
    Jason Coco : @Bill this is true
  • Does theMainClassName class have the following package line at the top:

    package archiveFolder;
    

    You need the class file to be in the same directory structure as the declared package. So if you had something like:

    org/jc/tests/TestClass.class
    

    its source file would have to look like this:

    package org.jc.tests;
    
    public class TestClass {
      public static void main(String[] args) {
        System.out.printf("This is a test class!\n");
      }
    }
    

    Then you could use the following to create the jar file and run it from the command line (assuming the current directory is at the top level, just above org):

    $ jar -cf testJar.jar org/jc/tests/*.class
    $ java -cp testJar.jar org.jc.tests.TestClass
    
  • Here's a concrete example of what does work, so you can compare your own situation.

    Take this code and put it anywhere, in a file called MainClass.java. (I've assumed a directory called src later. Normally you'd arrange the source to match the package, of course.)

    package archiveFolder;
    
    public class MainClass
    {
        public static void main(String[] args)
        {
            System.out.println("I'm MainClass");
        }
    }
    

    Then run each of these commands:

    # Compile the source
    javac -d . src/MainClass.java
    
    # Build the jar file
    jar cf archive.jar archiveFolder
    
    # Remove the unpackaged binary, to prove it's not being used
    rm -rf archiveFolder # Or rmdir /s /q archiveFolder on Windows
    
    # Execute the class
    java -cp archive.jar archiveFolder.MainClass
    

    The result:

    I'm MainClass
    

    How are you building your jar file? Is the code in the appropriate package?

  • Usually this happens when a dependent class (of a static member) is not found - like this, using log4j:

    public class MyClass {
      private static Logger log = Logger.getLogger("com.example");
    }
    

    The reason is that the initialization of such a static member can be understood as part of class loading - an error there causes the class not to be available (loadable), resulting in the error you described.

    Static constructors are another possible reason:

    public class MyClass {
      static {
         // any error caused here will cause the class to
         // not be loaded. Demonstrating with a stupid typecast.
         Object o = new String();
         Integer i = (Integer) o;
      }
    }
    
  • I think others have covered some common stuff here. I'd jar tf the jar and make sure the class is listed. I'd also double-check that the class is public and the method is "public static void main(String[] args)".

How do I send a patch to another developer and avoid merge conflicts?

How do I get a patch from a commit in order to send it to another developer? And how do I best avoid a merge conflict with this patch when merging our trees at a later date?

If you know how please explain how to do this in your VCS of choice such as subversion, git, Mercurial, bzr or etc.

From stackoverflow
  • In SVN you can simply make your changes, then before committing, redirect the output of svn diff to a file, as such:

    svn diff > mypatch.diff
    

    You can then revert your changes and apply the patch at a later date using:

    patch -p0 -i mypatch.diff
    

    As always, don't blindly apply patches to your code - always inspect them first.

    You may also find that the patch will break your source code if the source files have changed significantly enough since the patch was taken.

    You also cannot guarantee that there will not be merge conflicts when you attempt to check in the code.

  • In git you can pipe the output of git-diff between two commits like this:

    git diff fa1afe1 deadbeef > patch.diff
    

    Send the patch.diff to the developer and let him git-apply it to his workspace like this:

    git apply patch.diff
    

    If the other developer already has the commits available in his repository he could always pipe it in himself without merging like this:

    git diff fa1afe1 deadbeef | git apply
    

    You can then add and commit the changes in the diff the usual way.


    Now here comes the interesting part: when you have to merge the patch back to the master branch (which is public). Consider the following revision tree, where C* is the applied patch from C in the master branch:

    A---B---C---D          master, public/master
         \
          E---C*---F       feature_foo
    

    You can use git-rebase to update the topic branch (in this example named feature_foo) with its upstream head. What that means is that when you type in the following:

    git rebase master feature_foo
    

    Git will rearrange the revision tree like this and will also apply the patch itself:

    A---B---C---D          master, public/master
                 \
                  E*---F*  feature_foo
    

    Merging to the upstream branch will now be an easy fast-forward merge. Also check that the new commits E* and F* work like the previous E and F, respectively.

    You can do the same thing against another developer's branch using the same steps but instead of doing it on a public repo, you'll be fetching revisions from the developer's repository. This way you won't have to ask the other developer for a patch if it is already available from what he published at his repo.

    Please note: never rebase a public branch, because the command will rewrite git history, which is something you don't want to do on branches that people depend on, and it will create a mess when merging to remote repositories. Also, never forget to integrate often, so others in your team can pick up your changes.

    Spoike : Found out afterwards that you can do the same thing with git format-patch to format a patch and git am to apply and commit the patch. Example: git format-patch -k --stdout R1...R2 | git am -3 -k
  • Bzr handles sending a "merge directive", meaning it sends the patch for you so that the other party can simply click "OK" to merge and there's less futzing around with patch/apply etc.

    just: $ bzr send -o mycode.patch

    Spoike : bzr send only creates a merge directive between two different branches. I was looking for how to create patches like single commits or cherry-picking, and how merging works when applying those patches.
  • In Subversion there is no nice way of doing this. Yes, you can use svn diff + patch, but this will only postpone your problems until you merge, and by then chances are that you've forgotten about it.

    The way you would do it in Subversion would be to create a branch, do the commit on the branch and ask the recipient of the patch to switch to the branch. Then you can merge the branch back to trunk in the usual way.

Reflection - Getting the generic parameters from a System.Type instance

If I have the following code:

MyType<int> anInstance = new MyType<int>();
Type type = anInstance.GetType();

How can I find out which type parameter(s) "anInstance" was instantiated with by looking at the type variable? Is it possible?

From stackoverflow
  • Use Type.GetGenericArguments. For example:

    using System;
    using System.Collections.Generic;
    
    public class Test
    {
        static void Main()
        {
            var dict = new Dictionary<string, int>();
    
            Type type = dict.GetType();
            Console.WriteLine("Type arguments:");
            foreach (Type arg in type.GetGenericArguments())
            {
                Console.WriteLine("  {0}", arg);
            }
        }
    }
    

    Output:

    Type arguments:
      System.String
      System.Int32
    
  • Use Type.GetGenericArguments(). For example:

    using System;
    using System.Reflection;
    
    namespace ConsoleApplication1 {
      class Program {
        static void Main(string[] args) {
          MyType<int> anInstance = new MyType<int>();
          Type type = anInstance.GetType();
          foreach (Type t in type.GetGenericArguments())
            Console.WriteLine(t.Name);
          Console.ReadLine();
        }
      }
      public class MyType<T> { }
    }
    

    Output: Int32
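  • A few related members of Type can be useful alongside GetGenericArguments() when you don't know up front whether the type is generic at all. A small self-contained sketch (reusing the hypothetical MyType<T> from the question):

    ```csharp
    using System;

    public class MyType<T> { }

    public class Program
    {
        static void Main()
        {
            Type type = new MyType<int>().GetType();

            // True for a constructed generic type such as MyType<int>;
            // check this before calling the other generic-type members.
            Console.WriteLine(type.IsGenericType);

            // Strips the type arguments, giving the open generic MyType<>.
            Console.WriteLine(type.GetGenericTypeDefinition() == typeof(MyType<>));

            // The actual argument the instance was created with.
            Console.WriteLine(type.GetGenericArguments()[0] == typeof(int));
        }
    }
    ```

    Output: True, True, True.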

How to create a dashboard user interface using ASP.NET MVC?

I am currently building an application using ASP.NET MVC. The data entry pages are fairly easy to code: I just make the model for the page the type of my business object:

namespace MyNameSpace.Web.Views.ProjectEdit
{
    public partial class MyView : ViewPage<Project>
    {
    }
}

Where I am struggling is in figuring out the best way to implement a dashboard-like interface, with stand-alone parts, using ASP.NET MVC, where the model for each part would be different. I'm assuming that each part would be an MVC user control.

Also, how could I make it so each part is testable?

From stackoverflow
  • Hi there,

    I'm not familiar with MVC yet (I tried it once at the beginning of the betas, but I thought it was "scary" compared to WebForms), so I'm shooting blanks here, but you might download the Storefront MVC sample and learn from it, or see the videos under http://www.asp.net/learn/mvc-videos/

    I think that will help you accomplish what you need and get a better understanding of the MVC framework.

    mattruma : I will check these out ... I've watched some of the MVC videos, and while informative ... they usually stick to the kool-aid, instead of providing some meat.
    Adam Nofsinger : http://tekpub.com/production/aspmvc <-- Here's some meat for you, no affiliation. :)
  • I think that user controls are probably the way to go. I'm not sure what the concern is about testability: you should be able to test that your controller is providing the right view data. Since you'll have several models, each of these will probably be stored in a separate view data item, rather than aggregating them in a single model. Aggregating in a single model is also possible, although probably more brittle. Each control would just need to check for a particular view data item, rather than being specific to a particular model. You could approximate the model variable on each view page by doing:

    <% MyUserControlModel model = ViewData["MyUserControlModel"]
             as MyUserControlModel; %>
    
    <div id="myUserControl_dashboard" class="dashboard">
       Name: <%= model.Name %><br />
       Count: <%= model.Count %>
    </div>
    

    If you need to test your view, then you're probably already using Selenium or some other web testing framework. I don't think that these would care how the page was constructed and you should be able to construct your tests pretty much like you always do.

    mattruma : So let's say I have a HomeController ... and on the Index view I want to display X other user controls ... how would I do this?
    tvanfosson : Create your DashboardView that renders each of the UserControls in the view placed as you want them. Have your Index action collect the models needed for each of the user controls and store each model in an appropriately named ViewData item. Each control need only render its data.
    tvanfosson : (cont.) The Index action returns a ViewResult for the DashboardView with appropriate data for each of the user controls.
  • Check out the notion of sub-controllers in MVC-Contrib http://www.codeplex.com/MVCContrib. Basically, you run a full request for a partial, then display that partial where you want in your existing code.

    Alternatively you can check out this post: http://blog.codeville.net/2008/10/14/partial-requests-in-aspnet-mvc/

  • Hi,

    You can try the Kalitte .Net Dashboard Toolkit. It lets you create iGoogle-like dashboards with partial rendering in your ASP.NET application.

    Visit http://www.dynamicdashboards.net for more info.
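  • The ViewData approach tvanfosson describes above boils down to a string-keyed bag of heterogeneous models, with each partial casting its own entry back to its type. A minimal, framework-free sketch of that idea (all model and key names here are hypothetical, and a Dictionary<string, object> stands in for ViewData):

    ```csharp
    using System;
    using System.Collections.Generic;

    // Hypothetical models for two dashboard widgets.
    class NewsModel  { public string Headline = "Top story"; }
    class StatsModel { public int VisitCount = 42; }

    class Program
    {
        static void Main()
        {
            // Stands in for ViewData: one entry per user control, keyed by
            // name, instead of a single strongly-typed page model.
            var viewData = new Dictionary<string, object>
            {
                { "NewsModel",  new NewsModel()  },
                { "StatsModel", new StatsModel() }
            };

            // Each partial looks up its own entry and casts it back, just
            // like the ViewData["MyUserControlModel"] as MyUserControlModel
            // snippet in the answer above.
            var news  = viewData["NewsModel"]  as NewsModel;
            var stats = viewData["StatsModel"] as StatsModel;

            Console.WriteLine(news.Headline);    // Top story
            Console.WriteLine(stats.VisitCount); // 42
        }
    }
    ```

    Because each control depends only on its own key, the controls stay independently testable: a controller test just asserts that the right entries are present with the right types.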

PHP MYSQL - Explode help

Greetings,

I have comma-delimited data stored in one MySQL table. The row and column counts are stored in the database too. Now I have to output the data, using the row and column numbers stored in the database to draw the table.

The row and column numbers are user input, so they may vary.

Let's say the column number is 3 and the row number is 3.

I need to display it like this:

|___d1__|___d2__|___d3__|
|___d4__|___d5__|___d6__|
|___d7__|___d8__|___d9__|

Where d1-d9 would be the comma-delimited data stored in the MySQL table.

Thanks for helping me.

From stackoverflow
  • Assuming the user sets the table size to 2 rows and 3 columns and fills in 6 cells, the data that goes to the database will be:

    2,3,d1,d2,d3,d4,d5,d6

    When you fetch that string and explode it, you will get a one-dimensional array with 8 elements:

    $r = $e[0] (rows)

    $c = $e[1] (cols)

    $e[2]..$e[7] (data)

    • write the opening < table > tag
    • two loops, one inside the other
    • the outer loop generates the start of each row
    • write the opening < tr > tag
    • the inner loop generates the cells of the row
    • write the opening < td > tag
    • write the data: $e[2 + offset], the offset being calculated from the outer and inner loop counters
    • write the closing < td > tag
    • end of the inner loop
    • write the closing < tr > tag
    • end of the outer loop
    • write the closing < table > tag

    It should give you the idea

  • The row numbers are stored in the column named "rows"; the column numbers are stored in the column named "columns".

    The other data is stored in the column named "data".

    I'm a beginner in PHP, so I don't quite get what you mean. Do you mind explaining more to me?

    Please and thanks.

    Tomalak : Please use the comment system for this. Using answers to comment on other answers is the wrong approach.
  • This won't help you solve this very problem, but a word of good advice: never EVER write comma-separated values into a database field. You can't sensibly query information stored like this, and your application code will be cluttered with ugly conversions. Instead, use a separate table with a reference to the main table and one row per value.

    Vinko Vrsalovic : It does help him indeed, he can instead write a procedure to fix the tables and then write sensible queries instead of trying to go this way
  • You can turn the comma separated values from your data column into an array using the explode() function:

    <?php
      $result = mysql_query('SELECT rows, columns, data from table_name where id=1');
      $record = mysql_fetch_assoc($result);
    
      $rows = $record['rows'];
      $columns = $record['columns'];
    
      $data = explode(',' , $record['data']);
    
      if (sizeof($data) != $rows * $columns) die('invalid data');
    ?>
    

    To display the table, you need two nested for-loops:

    <table>
    <?php for ($row = 0; $row < $rows; $row++) : ?>
        <tr>
        <?php for ($column = 0; $column < $columns; $column++) : ?>
         <td>
          <?php echo $data[$row * $columns + $column]; ?>
         </td>
        <?php endfor ?>
        </tr>
    <?php endfor ?>
    </table>
    
    Tomalak : Hats off for understanding the question, I guess this was the most difficult part. :-)
  • Thank you very much foxy!

    Tomalak : Please use the comments for this, don't use answers.
    CesarB : @Tomalak: commenting needs 50 reputation (he has 19 as of this writing). He should instead edit his question.
  • How can I explode a field value of a table in a SELECT query?

    For example, I have one field in a table named "coordinates" which contains latitude and longitude.

    Now I want to use this latitude and longitude in a SELECT query.

    Can I separate these values and use them in a SELECT query?

LINQ to XML: parsing an XML file in which one node gives the type of another node

Hello!

Is it possible, using LINQ to XML, to use the string value of one node which tells what type of value is present in another node?

For example:

<node>
  <name>nodeName</name>
  <type>string</type>
</node>
<node>
  <name>0</name>
  <type>bool</type>
</node>
<node>
  <name>42</name>
  <type>int</type>
</node>

Thanks in advance

From stackoverflow
  • Well, you won't get a nice statically typed API given that the type information is only known at execution time - but you could easily write an extension method on XElement which looks for the appropriate subelements and returns System.Object. For instance (untested):

    public static object ParseValue(this XElement element)
    {
        XElement name = element.Element("name");
        XElement type = element.Element("type");
        // Insert error handling here :)
    
        switch (type.Value)
        {
            case "int":
                return int.Parse(name.Value);
            case "string":
                return name.Value;
            case "bool":
                return name.Value == "1"; // Or whatever
            default:
                throw new ArgumentException("Unknown element type " + type.Value);
        }
    }
    

    It's not how I'd design a data format, but if it's being thrust upon you...

  • public static void Main() {
     var xmlNodes = new XElement( "Nodes",
      new XElement( "Node",
       new XElement( "Name", "nodeName" ),
       new XElement( "Type", "string" )
      ),
      new XElement( "Node",
       new XElement( "Name", "True" ),
       new XElement( "Type", "bool" )
      ),
      new XElement( "Node",
       new XElement( "Name", "42" ),
       new XElement( "Type", "int" )
      )
     );
    
     var converters = new Dictionary<string,Func<string,object> >  {
      { "string", val => val },
      { "bool", val => Boolean.Parse( val ) },
      { "int", val => Int32.Parse( val ) }
     };
    
     var values = 
      from node in xmlNodes.Elements( "Node" )
      select converters[ node.Element( "Type" ).Value ]( node.Element( "Name" ).Value );
    
     foreach( var value in values )
      Console.WriteLine( value.GetType().ToString() + ": " + value );
    }
    

defining new tooltips in emacs

Hello, I would like to add custom tooltips to Emacs. More specifically, whenever I hover over a symbol (function/variable) name with my mouse, I would like to see a tooltip with the symbol's definition. I know that I can find this kind of info with a tool like cscope, but I have no idea how to attach the output of cscope to a tooltip. Does anyone have a partial solution (how to link a callback to a tooltip in Emacs in general) or a full one (how do I actually link the output of cscope to a tooltip)?

Thanks, Nir

From stackoverflow
  • Your Emacs installation should include the Elisp reference manual (if not, download it now - you're going to need it when developing your mode). To access it, go to Info (C-h i) and look for a node labeled "Elisp", sometimes in a separate "Emacs" menu. Type i for index and tooltip to look for information on tooltips. You should find node 32.19.4 Properties with Special Meanings, which tells you that the content of the help-echo property is a string that is the tooltip content, or a function that can construct the tooltip dynamically. Explore the manual around that node to find out more about text properties and how to set them.

    Here's a simple example:

    (insert (propertize "foo\n" 'help-echo "Tooltip!"))
    

    Type this into your *scratch* buffer and press C-j to run the code. Then point your mouse at the word "foo" and you should see the tooltip.

    sverrejoh : Can you explain the advantages/disadvantages in using text properties instead of overlays?
  • There is an AutoOverlay package that can automatically add overlays, and tooltips associated with those overlays, based on a regex match of the buffer text.

Free space in a CMD shell

Is there a way to get the amount of free disk space on a disk, or in a folder, from a CMD script without having to install any third-party applications?

I have a CMD script that copies a big file to a given directory. I could of course use the errorlevel returned from the copy command, but then I have to wait for the time it takes to copy the file (e.g. only to find out that the disk is full and the copy operation failed).

I would like to know before I start the copy whether it is worth attempting at all. I tried the DU.EXE utility from Sysinternals, but that shows occupied space only.

From stackoverflow
  • If you run "dir c:\", the last line will give you the free disk space.

    Edit: Better solution: "fsutil volume diskfree c:"

    VonC : Yes but you need administrator privilege to use that command...
  • A possible solution:

    dir|find "bytes free"
    

    a more "advanced solution", for Windows Xp and beyond:

    wmic /node:%COMPUTERNAME% LogicalDisk Where DriveType="3" Get DeviceID,FreeSpace|find /I "c:"
    

    The Windows Management Instrumentation Command-line (WMIC) tool (Wmic.exe) can gather vast amounts of information about a Windows Server 2003 machine, as well as Windows XP or Vista. The tool accesses the underlying hardware by using Windows Management Instrumentation (WMI). Not for Windows 2000.

    Joey : +1 for WMI. Should be the only stable solution. Relying on a specific language (for find) is probably a bad idea :)
  • Is cscript a 3rd party app? I suggest trying Microsoft Scripting, where you can use a programming language (JScript, VBS) to check on things like List Available Disk Space.

    The scripting infrastructure is present on all current Windows versions (including 2008).

  • df.exe

    Shows all your disks; total, used and free capacity. You can alter the output by various command-line options.

    You can get it from http://www.paulsadowski.com/WSH/cmdprogs.htm, http://unxutils.sourceforge.net/ or somewhere else. It's a standard Unix utility, like du.

  • Thank you all for taking the time to answer. I now have a couple of solutions that I have to deal with. Thanks // Peter

How do I use std::tr1::mem_fun in Visual Studio 2008 SP1?

The VS2008 SP1 documentation talks about std::tr1::mem_fun.

So why, when I try to use std::tr1::mem_fun, do I get this compile error?

'mem_fun' : is not a member of 'std::tr1'

At the same time, I can use std::tr1::function without problems.

Here is the sample code I am trying to compile, which is supposed to call TakesInt on an instance of Test, via a function<void (int)>:

#include "stdafx.h"
#include <iostream>
#include <functional>
#include <memory>

struct Test { void TakesInt(int i) { std::cout << i; } };

void _tmain() 
{
    Test* t = new Test();

    //error C2039: 'mem_fun' : is not a member of 'std::tr1'
    std::tr1::function<void (int)> f =
        std::tr1::bind(std::tr1::mem_fun(&Test::TakesInt), t);
    f(2);
}

I'm trying to use the tr1 version of mem_fun, because when using std::mem_fun my code doesn't compile either! I can't tell from the compiler error whether the problem is with my code or whether it would be fixed by using tr1's mem_fun. That's C++ compiler errors for you (or maybe it's just me!).


Update: Right. The answer is to spell it correctly as mem_fn!

However, when I fix that, the code still doesn't compile.

Here's the compiler error:

error C2562: 
'std::tr1::_Callable_obj<_Ty,_Indirect>::_ApplyX' :
  'void' function returning a value
From stackoverflow
  • I am no expert on either TR1 or VS2008, but a quick googling suggests that the function you're looking for is std::tr1::mem_fn instead. (At least, that's what Boost calls it in their TR1 implementation, and that's how it's detailed on Wikipedia.)

    I'm not sure why you're getting a compile error with the old version of mem_fun though. If you post the compiler's message about that, it might help us figure it out.

  • To use mem_fun like that you need to fully specify all the template arguments (as mem_fun is a class and automatic template parameter deduction is not done on classes). Also mem_fun only has a default constructor that takes 0 arguments.

    Not having the full class definition it is hard to get correct.
    But my best bet at what you wanted would be this: (or something close)

     std::tr1::mem_fun<Test,void (Test::*)(Test*),&Test::TakesInt>()
    

    What I think you are looking for is mem_fn(). This is a function that returns a callable wrapper object. Because it is a function, automatic template parameter deduction is done.

      std::tr1::mem_fn(&Test::TakesInt)
    

    To solve the second problem use: std::bind1st()

      f=    std::bind1st(std::tr1::mem_fn(&Test::TakesInt), t);
    
    mackenir : struct Test is fully defined in the sample code. I've updated the question with the compiler error I get when I use mem_fn.
  • Change it to this:

    std::tr1::function<void (int)> f =
        std::tr1::bind(std::tr1::mem_fn(&Test::TakesInt), t, std::tr1::placeholders::_1);
    f(2);
    

    The binder requires the int argument. So you have to give it a placeholder which stands for the integer argument that the generated function object needs.

    Btw: I'm not sure whether you already know this or not. But you don't need that mem_fn for this. Just change it to

    std::tr1::function<void (int)> f =
        std::tr1::bind(&Test::TakesInt, t, std::tr1::placeholders::_1);
    f(2);
    
    mackenir : Thanks! I had just worked out where _1 came from and was coming back to update my question with the fix.
    mackenir : Okay, all answers got me nearer to a solution, so I had trouble picking which one to accept. But unfortunately I can only accept one. I think this one shows the cleanest way to implement what I want, using the tr1 additions, but I up-voted all others. Thanks all.

Is there a fast way to transfer all the variables of one identical object into another in C#?

This is probably a simple question. Suppose I have an object called Users that contains a lot of protected variables.

Inside that Users class I have a method that creates a temporary Users object, does something with it and, if successful, transfers all the variables from the temp Users object into the current one.

Is there some fast way in C# to transfer all the variables from one Users object into another Users object without doing this?

this.FirstName = temp.FirstName;
this.LastName = temp.LastName;
........75 variables later......
this.FavoriteColor = temp.FavoriteColor
From stackoverflow
  • A better approach is to implement the ICloneable interface. But you'll find it doesn't save you a lot of work.

  • You should check out cloning in C#.

    http://stackoverflow.com/questions/78536/cloning-objects-in-c

  • A better solution might be to move whatever this method does outside of your class, and then just assign the temp user object to your main user object reference, like so:

    _User = tmpUser;
    

    sparing you the 75 lines of code. Whenever I have a class creating an instance of itself inside one of its own methods, I always like to blink a couple of times and make sure I really need to be doing that.

    danmine : I didn't know it was bad practice. How about if I call a DataReader with a GetValue()?
    Jon Skeet : @MusiGenesis: Factory methods are pretty standard though, and have many advantages over public constructors.
    MusiGenesis : @Jon: does that sound like what's going on here?
    MusiGenesis : @Danmine: can you post more of your method? I'd be interested in knowing what you're trying to do there.
  • I think serializing and then deserializing an object will create a new object instance. This should be identical to the former object.
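  • If writing out 75 assignments is the pain point, one common sketch is a reflection-based copy. Note this is only a sketch: it copies readable, writable public instance properties (the question's protected fields would need BindingFlags.NonPublic and GetFields instead), and reflection is slower than direct assignment. The Users members here are hypothetical stand-ins:

    ```csharp
    using System;
    using System.Reflection;

    class Users
    {
        public string FirstName { get; set; }
        public string LastName { get; set; }
        public string FavoriteColor { get; set; }
    }

    static class ObjectCopier
    {
        // Copies every readable and writable public instance property
        // from source to target, one by one, via reflection.
        public static void CopyProperties<T>(T source, T target)
        {
            foreach (PropertyInfo p in typeof(T).GetProperties(
                         BindingFlags.Public | BindingFlags.Instance))
            {
                if (p.CanRead && p.CanWrite)
                    p.SetValue(target, p.GetValue(source, null), null);
            }
        }
    }

    class Program
    {
        static void Main()
        {
            var temp = new Users { FirstName = "Ada", FavoriteColor = "Blue" };
            var user = new Users();
            ObjectCopier.CopyProperties(temp, user);
            Console.WriteLine(user.FirstName + " " + user.FavoriteColor); // Ada Blue
        }
    }
    ```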

SharpSvn question

I was hoping to automate some tasks related to Subversion, so I got SharpSvn. Unfortunately I can't find much documentation for it.

I want to be able to view the changes after a user commits a new revision so I can parse the code for special comments that can then be uploaded into my ticket system.

Can anyone help or point me in the right direction?

thanks

From stackoverflow
  • Is this of any use?

    http://blogs.open.collab.net/svn/2008/04/sharpsvn-brings.html

  • I wonder whether subversion hooks (at the svn server) might not be another approach here? I have not tried it, but CaptainHook appears to offer svn->.NET hook integration.

  • If you just want to browse SharpSvn you can use http://docs.sharpsvn.net/. The documentation there is far from complete as the focus is primarily on providing features. Any help on enhancing the documentation (or SharpSvn itself) is welcome ;-)

    To use log messages for your issue tracker you can use two routes:

    1. A post-commit hook that processes changes one at a time
    2. A scheduled service that calls 'svn log -r <last-retrieved>:HEAD' every once in a while.

    The latest daily builds of SharpSvn have some support for commit hooks, but that part is not really API-stable yet.

    You could create a post commit hook (post-commit.exe) with:

    static void Main(string[] args)
    {
      SvnHookArguments ha;
      if (!SvnHookArguments.ParseHookArguments(args, SvnHookType.PostCommit, false, out ha))
      {
        Console.Error.WriteLine("Invalid arguments");
        Environment.Exit(1);
      }
    
      using (SvnLookClient cl = new SvnLookClient())
      {
        SvnChangeInfoEventArgs ci;
        cl.GetChangeInfo(ha.LookOrigin, out ci);
    
        // ci contains information on the commit e.g.
        Console.WriteLine(ci.LogMessage); // Has log message
    
        foreach (SvnChangeItem i in ci.ChangedPaths)
        {
           // inspect each changed path here
        }
      }
    }
    

    (For a complete solution you would also have to hook the post-revprop-change, as your users might change the log message after the first commit)

é is not correctly parsed

My application reads XML from a URLConnection. The XML encoding is ISO-8859-1, and it contains the é character. I use the Xerces SAX parser to parse the received XML content. However, é is not parsed correctly when the application runs under Linux; everything works fine on Windows. Could you please give me some hints? Thanks a lot.

From stackoverflow
  • I bet this is related to file.encoding. Try running with -Dfile.encoding=iso-8859-1 as a VM parameter on linux.

    If this works, you probably need to specify the correct format when opening the stream (somewhere in your code).

  • This is probably a case of a file marked as "ISO-8859-1" when it in reality is in another encoding.

    Often this happens with "ISO-8859-1" and "Windows-1252": they are used as if they were interchangeable, but they are not. (In the comments to this answer it has been clarified that both encodings agree on a character code for "é", so Windows-1252 is probably not it.)

    You can use a hex editor to find out the exact character code of the "é" in your file. You can take that value as a hint to what encoding the file is in. If you have control over how the file is produced, a look at the responsible code/method is also advisable.

    Jon Skeet : I agree with the statements about them often being confused, and them actually being different - but the e-acute is in ISO-8859-1 at U+00E9, so I suspect it's not the problem in this particular case.
    Tomalak : Then maybe the file has been saved in *yet another* encoding.
  • The first thing you should do is determine the real encoding of the XML file, as Tomalak suggests, not the encoding stated in the header.

    You can start by opening it with Internet Explorer. If encoding is not correct you may see an error like this:

    An invalid character was found in text content. Error processing resource ...

    Or the following one:

    Switch from current encoding to specified encoding not supported. Error processing resource ...

    Using a text editor with several encodings support is the next step. You can use Notepad++ that is free, easy to use and supports several encodings. No matter what xml header says about encoding, the editor tries to detect encoding of the file and displays it on status bar.

    If you determine that the file encoding is correct then you may not be handling the encoding correctly inside Java. Take into account that Java strings are UTF-16 and that by default, when converting from/to byte arrays, if no encoding is specified Java uses the system encoding (Windows-1252 under Windows or UTF-8 on modern Linuxes). Some encoding conversions only cause "strange" characters to appear, such as conversions between fixed 8-bit encodings (i.e. Windows-1252 <-> ISO-8859-1). Other conversions raise encoding exceptions because of invalid characters (try importing Windows-1252 text as UTF-8, for example).

    An example of invalid code is the following:

    // Parse the input
    SAXParser saxParser = factory.newSAXParser();
    InputStream is = new ByteArrayInputStream(stringToParse.getBytes());
    saxParser.parse( is, handler );
    

    The conversion stringToParse.getBytes() by default returns the string encoded as Windows-1252 on Windows platforms. If the XML text was encoded in ISO-8859-1, at this step you already have wrong characters. The correct approach is to read the XML as bytes rather than a String, and let SAX manage the XML encoding.
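    A sketch of that byte-oriented approach (the XML literal and the class name are just for illustration): hand the parser the raw bytes and it will honor the encoding declared in the XML header:

    ```java
    import java.io.ByteArrayInputStream;
    import java.nio.charset.StandardCharsets;
    import javax.xml.parsers.SAXParserFactory;
    import org.xml.sax.helpers.DefaultHandler;

    public class EncodingDemo {
        public static String parseName(byte[] xmlBytes) throws Exception {
            final StringBuilder text = new StringBuilder();
            DefaultHandler handler = new DefaultHandler() {
                @Override
                public void characters(char[] ch, int start, int length) {
                    text.append(ch, start, length);
                }
            };
            // Hand the parser raw bytes; it reads the encoding="..." declaration itself.
            SAXParserFactory.newInstance().newSAXParser()
                    .parse(new ByteArrayInputStream(xmlBytes), handler);
            return text.toString();
        }

        public static void main(String[] args) throws Exception {
            String xml = "<?xml version=\"1.0\" encoding=\"ISO-8859-1\"?><name>Andr\u00e9</name>";
            // Encode the bytes as ISO-8859-1, matching the declaration, then parse.
            System.out.println(parseName(xml.getBytes(StandardCharsets.ISO_8859_1)));
        }
    }
    ```

    The same parse with xml.getBytes() on a Windows-1252 system would still work here, since é occupies the same code point in both encodings, but characters that differ between the two would come out wrong.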

  • If the XML declaration doesn't specify an encoding, the sax parser will try to use the default encoding, UTF-8.

    If you know the character encoding but it isn't specified in the XML declaration, you can tell the parser to use that encoding with an InputSource:

    InputSource inputSource = new InputSource(xmlInputStream);
    inputSource.setEncoding("ISO-8859-1");
    
    erickson : To be more precise: it *must* use UTF-8 if the encoding is not specified in the XML declaration.
    Sophie Tatham : Thanks - I thought so but wasn't certain.
  • Sorry for my late reply. We solved the problem. We did some wrong operation on the input stream (just as what Fernando Miguélez said, conversion caused problem).

    Thanks for all of you guys' help.

Need advice on combining ORM and SQL with legacy system

We are in the process of porting a legacy system to .NET, both to clean up the architecture and to take advantage of lots of new possibilities that just aren't easily done in the legacy system.

Note: When reading my post before submitting it I notice that I may have described things a bit too fast in places, ie. glossed over details. If there is anything that is unclear, leave a comment (not an answer) and I'll augment as much as possible

The legacy system uses a database, and 100% custom-written SQL all over the place. This has led to wide tables (i.e. many columns), since code that needs data only retrieves what is necessary for the job.

As part of the port, we introduced an ORM layer which we can use in addition to custom SQL. The ORM we chose is DevExpress XPO, and one of its features has also led to some problems for us, namely that when we define an ORM class for, say, the Employee table, we have to add properties for all the columns, otherwise it won't retrieve them for us.

This also means that when we retrieve an Employee, we get all the columns, even if we only need a few.

One nice thing about having the ORM is that we can put some property-related logic into the same classes, without having to duplicate it all over the place. For instance, the simple expression to combine first, middle and last name into a "display name" can be put down there, as an example.

However, if we write SQL code somewhere, either in a DAL-like construct or, well, wherever, we need to duplicate this expression. This feels wrong and looks like a recipe for bugs and maintenance nightmare.

So we seemed to have two choices:

  • ORM, fetches everything, can have logic written once
  • SQL, fetches what we need, need to duplicate logic

Then we came up with an alternative. Since the ORM objects are code-generated from a dictionary, we decided to generate a set of dumb classes as well. These have the same number of properties, but aren't tied to the ORM in the same manner. Additionally we added interfaces for all of the objects, also generated, and made both the ORM class and the dumb object implement the corresponding interface.

This allowed us to move some of this logic out into extension methods tied to the interfaces. Since the dumb objects carry enough information, we can plug them into our SQL classes and get back a List with the logic available, instead of a DataTable; this looks to be working.
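The extension-method idea can be sketched like this (the interface and property names are illustrative, not our actual generated code — the display-name rule is the one mentioned earlier):

```csharp
using System.Linq;

public interface IEmployeeName
{
    string FirstName { get; }
    string MiddleName { get; }
    string LastName { get; }
}

public static class EmployeeNameExtensions
{
    // The display-name logic is written once and is available to both
    // the ORM class and the dumb generated class via the shared interface.
    public static string DisplayName(this IEmployeeName e)
    {
        return string.Join(" ",
            new[] { e.FirstName, e.MiddleName, e.LastName }
                .Where(s => !string.IsNullOrEmpty(s))
                .ToArray());
    }
}
```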

However, this has led to another issue. Suppose I want to write a piece of code that only displays or processes employees, in a context where I need to know who they are (i.e. their identifier in the system) as well as their name (first, middle and last). If I use this dumb object, I have no guarantee from the compiler that the code calling me is really providing all this information.

One solution is for us to make the object know which properties have been assigned values, and an attempt to read an unassigned property crashes with an exception. This gives us an opportunity at runtime to catch contract breaches where code is not passing along enough information.

This also looks clunky to us.

So basically what I want advice on is if anyone else has been in, or are in, this situation and any tips or advice you can give.

We cannot, at the present time, break up the tables. The legacy application will still have to exist for a number of years due to the size of the port, and the .NET code is not an in-3-years-release type of project but will be phased in over releases along the way. As such, both the legacy system and the .NET code need to work with the same tables.

We are also aware that this is not an ideal solution so please refrain from advice like "you shouldn't have done it like this". We are well aware of this :)


One thing we've looked into is to create an XML file, or similar, with "contracts". So we could put into this XML file something like this:

  • There is an Employee class with these 50 properties
  • Additionally, we have these 7 variations, for various parts of the program
  • Additionally, we have these 10 pieces of logic, that each require property X, Y and Z (X, Y and Z varies between those 10)

This could allow us to code-generate those 8 classes (full class + 7 smaller variations), and have the generator detect that for variation #3, property X, Y and K is present, and I can then tie in either the code for the logic or the interfaces the logic needs into this class automagically. This would allow us to have a number of different types of employee classes, with varying degrees of property coverage, and have the generator automatically add all logic that would be supported by this class to it.

My code could then say that I need an employee of type IEmployeeWithAddressAndPhoneNumbers.

This too looks clunky.

From stackoverflow
  • I would suggest that eventually a database refactoring (normalization) is probably in order. You could work on the refactoring and use views to provide the legacy application with an interface to the database consistent with what it expects. That is, for example, break the employee table down into employee_info, employee_contact_info, employee_assignments, and then provide the legacy application with a view named employee that does a join across these three tables (or maybe a table-based function if the logic is more complex). This would potentially allow you to move ahead with a fully ORM-based solution, which is what I would prefer, and keep your legacy application happy. I would not proceed with a mixed solution of ORM/direct SQL, although you might be able to augment your ORM by having some entity classes which provide different views of the same data (say a join across a couple of tables for read-only display).

  • "We can not, at the present time, break up the tables. The legacy application will still have to exist for a number of years due to the size of the port, and the .NET code is not a in-3-years-release type of project but will be phased in in releases along the way. As such, both the legacy system and the .NET code need to work with the same tables."

    Two words: materialized views.

    You have several ways of "normalizing in place".

    1. Materialized Views, a/k/a indexed views. This is a normalized clone of your source tables.

    2. Explicit copying from old tables to new tables. "Ick" you say. However, consider that you'll be incrementally removing functionality from the old app. That means that you'll have some functionality in new, normalized tables, and the old tables can be gracefully ignored.

    3. Explicit 2-way synch. This is hard, but not impossible. You normalize via copy from your legacy tables to correctly designed tables. You can -- as a temporary solution -- use stored procedures and triggers to clone transactions into the legacy tables. You can then retire these kludges as your conversion proceeds.

    You'll be happiest doing this in two absolutely distinct schemas. Since the old database probably doesn't have a well-designed schema, your new database will have one or more named schemas so that you can maintain some version control over the definitions.

  • Although I haven't used this particular ORM, views can be useful in some cases in providing lighter-weight objects for display and reporting in these types of databases. According to their documentation they do support such a concept: XPView Concepts

Compromising my integrity ?

I have recently written a small simple application that takes snapshots of your monitor(s) every X seconds / minutes. The idea is to keep a record of your activity.

More details here: http://www.artenscience.co.uk/artenscience/ScreenAudit.html

I've since had several emails from people asking me to introduce a 'Stealth Mode', basically making the application invisible whilst running, and also to introduce an FTP or email mechanism for the captured screenshots.

Technically this is straightforward; however, I can guess how it will be used ... almost certainly as a way of capturing information covertly for dishonest purposes.

However, it could also be used in a good way. But I know that's not the intention.

Do I develop this functionality knowing that it could and most probably will be used in ways that I am not comfortable with ?

At the moment I am tending NOT to do the development. What are the views of the community?

I realise this is not a straightforward programming question, but I can't think of a better place to ask this.

From stackoverflow
  • Maybe you should talk to the TimeSnapper people about it. They do this already.

    Edit: Oh, I was wrong about TimeSnapper. No stealth mode. But I sure would be uncomfortable being forced to deliver daily productivity reports from TimeSnapper.

  • Yes, I know about TimeSnapper. Like me they make it easy to tell that the application is running. The difference is I am being asked to develop 'stealth mode' so that people cannot tell that the program is running.

  • I would do the same thing TimeSnapper and LogMeIn do. Don't implement a 'stealth mode'.

    LogMeIn always shows a message that the pc is currently remote controlled.

    Maybe you should talk to the TimeSnapper people about it. They do this already.

    Timesnapper always leaves the tray icon visible.

  • Given that there is a huge probability that such a stealth mode will be used for, as you call it, dishonest purposes, I wouldn't do it. I guess there must also be laws against spying on people like this, so you may actually be making it easier for people to break the law.

    It also depends on your own purpose with the application. Why did you write it? Not only to make money --- otherwise, you wouldn't have posted this question here. If you suspect you may regret such a feature later on, just don't build it.

    Finally, if some users really intend to spy on other people's screens, they will find some way to do it, anyway. That doesn't mean, though, that you should make it easier for them!

  • Don't ask me why, but I've had in a former life some very good experience with the software made by the folks at Refog. If you dislike the idea of implementing such "features" you can probably point your people to this company.

  • It looks like you are developing software, and then making it available for purchase, rather than having a boss demanding these features, so I'd say don't do anything you would feel uncomfortable with.

    Note: I'm not saying that if there was a boss demanding something you would not want to provide that you should do it without a fight, but the difference between 'might not get some sales' and 'might get fired' is a significant one.

  • I wrote it to 'scratch an itch' basically and for my own purposes. Making money was secondary and onnodb sums it up I think. I would regret doing it. So I won't do it. It's just been listed on Apple Downloads site as well - so thats good news :-)

  • We (at TimeSnapper) get requests for that feature all the time (we had one just today in fact).

    We don't give in to it. It's a matter of principle. I fully agree with your opinion on this and I'm really pleased to see the responses people have given here, which basically support that stance.

    Best of luck with your competing work ;-)

  • You know, such systems can be gamed.

    For example, remember that run of Dilbert in which the employees have cameras strapped to their heads? Mincom actually does this. Well, not strapped to your head, but they do use cameras and keystroke loggers to check whether you're doing work or personal admin. A chum of mine used a second monitor placed out of camera for things he didn't want seen, and wrote (at home, obviously) an interesting piece of software to feed the logger a steady flow of characters from files in the source tree. We decided it was best not to go off the charts, so he only turned out 800 LoC/day.

    In point of fact my chum was doing the right thing; he just declined to submit to their invasive and insulting treatment, and objected to the use of an inappropriate "productivity" metric. If you think LoC is a good measure of productivity then you never met a cretin who churned out endless lines of rubbish code. I have. The company would have been better off if he'd spent his days googling porn, but on LoC the guy was a dynamo.

    Because it never crossed their petty, bureaucratic minds that someone might do a "mission impossible" and play them prerecorded acceptable activity, my friend had surprising latitude. Once or twice he got one of those horrible little generic award pens you get from companies too lousy to spring for a perspex plaque. They thought he was a highly productive saint.

    Some (lawyers and middle management types) would be angrily horrified by this. As far as I can see, the company made its expectations very clear, and he met them.

    sonstabo : Could he not take on a new position? Work for a company that tape you to ensure that you are working?
    Peter Wone : The commute was convenient. Eventually he did move on.
  • By my reading, this goes against several imperatives of the ACM Code of Ethics. Not that you're a member (or want to be), but I'd consider what it means to do something that violates the Code of Ethics of your field's primary association. Has anyone given a good reason for needing this feature?

    What it's not about: your software being used in intended ways. Any software can be used in ways you don't intend. That's almost the whole point of the entire free and open source movements, for example. Torvalds probably doesn't lose any sleep wondering whether people are building nuclear weapons with Linux.

jQuery two sliders controlling each other

This is in reference to the question previously asked. The problem here is that each slider controls the other, which results in feedback. How do I stop it?

$(function() {
    $("#slider").slider({ slide: moveSlider2 });
    $("#slider1").slider({ slide: moveSlider1 });

    function moveSlider2(e, ui) {
        $('#slider1').slider('moveTo', Math.round(ui.value));
    }

    function moveSlider1(e, ui) {
        $('#slider').slider('moveTo', Math.round(ui.value));
    }
});
From stackoverflow
  • You could store a var CurrentSlider = 'slider';

    On mousedown on either of the sliders, you set the CurrentSlider value to that slider, and in your moveSlider(...) method you check whether this is the CurrentSlider; if not, you don't propagate the slide (avoiding the feedback).

  • This is sort of a hack, but works:

    $(function () {
        var slider = $("#slider");
        var slider1 = $("#slider1");
        var sliderHandle = $("#slider").find('.ui-slider-handle');
        var slider1Handle = $("#slider1").find('.ui-slider-handle');
    
        slider.slider({ slide: moveSlider1 });
        slider1.slider({ slide: moveSlider });
    
        function moveSlider( e, ui ) {
            sliderHandle.css('left', slider1Handle.css('left'));
        }
    
        function moveSlider1( e, ui ) {
            slider1Handle.css('left', sliderHandle.css('left'));
        }
    });
    

    Basically, you avoid the feedback by manipulating the css directly, not firing the slide event.

  • You could just give an optional parameter to your moveSlider1 and moveSlider2 functions that, when set to a true value, suppresses the recursion.
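    Stripped of jQuery, the guard idea behind these answers looks roughly like this (a sketch with made-up setter names, not actual slider code):

    ```javascript
    // Two linked values that mirror each other without infinite recursion.
    var syncing = false;
    var a = { value: 0 };
    var b = { value: 0 };

    function setA(v) {
      a.value = v;
      if (!syncing) {        // only propagate if we initiated the change
        syncing = true;
        setB(v);
        syncing = false;
      }
    }

    function setB(v) {
      b.value = v;
      if (!syncing) {
        syncing = true;
        setA(v);
        syncing = false;
      }
    }

    setA(42); // b.value follows a.value; the flag stops the feedback loop
    ```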

  • A simpler approach which is kind of a hybrid of the above answers:

        $(function() {
        var s1 = true;
        var s2 = true;
        $('#slider').slider({
         handle: '.slider_handle',
         min: -100,
         max: 100,
         start: function(e, ui) {
         },
         stop: function(e, ui) { 
         },
         slide: function(e, ui) {
          if(s1)
          {
           s2 = false;
           $('#slider1').slider("moveTo", ui.value);
           s2 = true;
          }
         }
        });
    
    
        $("#slider1").slider({ 
         min: -100, 
         max: 100,
         start: function(e, ui) {
         },
         stop: function(e, ui) { 
         },
         slide: function(e, ui) {
          if(s2)
          {
           s1 = false;
           $('#slider').slider("moveTo", ui.value);
           s1 = true;
          }
         }
         });
    
    });
    

Is a formal application framework too much?

Our shop designs and creates custom software applications for a variety of vertical industries. We currently use a modified version of the CSLA framework for most of our development.

It's a great framework: it supports a variety of ways to communicate with a database (directly, remoting, WCF and so on) and it offers a ton of features, many of which we do not use. The pros of the framework are numerous, the big one being that Rockford Lhotka is a step ahead when it comes to new technology, meaning we don't have to do the research. The cons are that you are at the mercy of how the creator implements changes and technology, and all the many features that you do not use.

With the advent of LINQ to SQL we are seriously looking at making the switch. Granted, a lot of what is generated is purely data access, but by creating partial classes we could extend the data access and provide business logic. We could also create some formal interfaces for working with the business logic, use or create our own rules manager, and so on. In a nutshell, we would be growing our own application framework.
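For illustration, the partial-class approach might look like this (Employee stands in for a designer-generated LINQ to SQL entity, and the validation rule is made up):

```csharp
// Sketch of the designer-generated half of the entity:
public partial class Employee
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
}

// Hand-written half of the same class: business logic lives here
// and survives regeneration of the designer file.
public partial class Employee
{
    public string DisplayName
    {
        get { return FirstName + " " + LastName; }
    }

    public bool IsValid()
    {
        return !string.IsNullOrEmpty(LastName);
    }
}
```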

I noticed during Jeff Atwood's discussion of the ASP.NET MVC framework at PDC 2008 that he was primarily working with a single project, and it also looked like he was extending LINQ to SQL with partial classes. This architecture seems to demonstrate that the code is easily maintainable, new features are quickly added, bugs are quickly fixed, and it performs well ... most of the time.

I'm just curious as to what other user's thoughts are? Am I crazy to abandon our framework for something that I perceive is easier to use and more maintainable?

From stackoverflow
  • The cons of the framework are the fact that you are at the mercy of how the creator implements changes and technology and all the many features that you do not use.

    It would seem that you will be exposed to these same cons with LINQ so keep that in mind when making a change. In any event you should do a complete analysis before making such a leap, perhaps by porting one of the smaller existing apps or a subset of one of the apps as a case study.

    mattruma : Good idea ... I will do this.
  • I read a good blog post by Rick Strahl called A Simple Business Object Wrapper for LINQ to SQL that answers some of my questions. He takes some time and explains his viewpoints on frameworks.