Monday, April 11, 2011

Building up a monthly total from past 12 months

CurrentMonth = Month(CurrentDate)
CurrentYear = Year(CurrentDate)

    SQL = "SELECT Spent, MONTH(Date) AS InvMonth, YEAR(Date) As InvYear FROM Invoices WHERE YEAR(Date) = '" & CurrentYear & "' AND MONTH(Date) = '" & CurrentMonth & "'"
    RecordSet.Open SQL, Connection, adOpenStatic, adLockOptimistic, adCmdText
    Do Until RecordSet.EOF
        MTotal(i) = MTotal(i) + RecordSet.Fields("Spent")
        RecordSet.MoveNext
    Loop
    RecordSet.Close

This is the code I currently have to build up a total spent for a given month. I wish to expand this to retrieve the totals per month, for the past 12 months.

The way I see to do this would be to loop backwards through the CurrentMonth value, and if the CurrentMonth value reaches 0, roll the value of CurrentYear back 1, using the loop variable (i) to build up an array of 12 values: MTotal().

What do you guys think?

From stackoverflow
  • A group by should get you on the way.

    SELECT
      SUM(Spent) AS Spent
      , MONTH(Date) AS InvMonth
      , YEAR(Date) AS InvYear
    FROM
      Invoices
    WHERE
      DATEDIFF(mm, Date, GETDATE()) < 12
    GROUP BY
      YEAR(Date), MONTH(Date)
    


    Josh's DATEDIFF is a better solution than my original TOP and ORDER BY

    Tom H. : You probably don't want to use TOP 12. A better way would be to check for a date range. Also, avoid things like YEAR(date) = x. That prevents SQL Server from using any indexes on "date". Calculate the earliest and latest dates and do something like "date BETWEEN x AND y"
    Lieven : I Agree. The query has been updated.
  • The only problem with this is that I require a monthly total for each of the past 12 months, rather than the total for the past 12 months. Otherwise I see how improving the SQL rather than using VB6 code could be a better option.

  • I would tackle this by "rounding" the date to the Month, and then Grouping by that month-date, and totalling the Spent amount:

     SELECT SUM(Spent) AS [TotalSpent],
            DATEADD(Month, DATEDIFF(Month, 0, [Date]), 0) AS [MonthDate]
     FROM   Invoices 
     WHERE      [Date] >= '20080301'
            AND [Date] <  '20090301'
     GROUP BY DATEADD(Month, DATEDIFF(Month, 0, [Date]), 0)
     ORDER BY [MonthDate]
    

    The [MonthDate] can be formatted to show Month / Date appropriately, or in separate columns.

    The WHERE clause can be parameterised to provide a suitable range of records to be included.

    Kristen : P.S. Please ask if you would like an example of the "formatting" I refer to
    orip : +1, good solution, I ended up doing this. I also think you meant "truncating", not "rounding".
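    The DATEADD/DATEDIFF pair counts whole months since a fixed epoch (day 0) and adds that many months back to the epoch, which lands every date on the first of its month at midnight. The same truncation, sketched in Python purely for illustration:

```python
from datetime import date

def truncate_to_month(d):
    # Same effect as DATEADD(Month, DATEDIFF(Month, 0, d), 0):
    # every date collapses to the first day of its month at midnight.
    return d.replace(day=1)

# All invoices dated anywhere in March 2009 share one group key.
print(truncate_to_month(date(2009, 3, 17)))  # 2009-03-01
```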
  • The solution I came up with would be :

    For i = 0 To 11
        If CurrentMonth = 0 Then
            CurrentMonth = 12
            CurrentYear = CurrentYear - 1
        End If
    
        SQL = "SELECT Spent, MONTH(Date) AS InvMonth, YEAR(Date) As InvYear FROM Invoices WHERE YEAR(Date) = '" & CurrentYear & "' AND MONTH(Date) = '" & CurrentMonth & "'"
        RecordSet.Open SQL, Connection, adOpenStatic, adLockOptimistic, adCmdText
        Do Until RecordSet.EOF
            MTotal(i) = MTotal(i) + RecordSet.Fields("Spent").Value
            RecordSet.MoveNext
        Loop
        RecordSet.Close
    
        CurrentMonth = CurrentMonth - 1
    Next
    

    I believe this should work as expected. However, I still look forward to seeing what solutions you guys can come up with, or if anyone spots an issue with my fix.
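    The month/year rollover bookkeeping in the loop above can be checked in isolation; here is the same logic sketched in Python (function name is mine, for illustration only):

```python
from datetime import date

def last_12_months(today):
    # Mirror the VB6 loop: walk backwards from the current month,
    # rolling the year back when the month counter hits zero.
    year, month = today.year, today.month
    pairs = []
    for _ in range(12):
        if month == 0:
            month = 12
            year -= 1
        pairs.append((year, month))
        month -= 1
    return pairs

print(last_12_months(date(2011, 4, 11))[:3])  # [(2011, 4), (2011, 3), (2011, 2)]
```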

Using ASP.NET default Model Binders with DateTime

Hi,

I've been trying to use the default ASP.NET MVC model binders but I'm having issues with binding DateTime. I've looked at Scott's post here but it seems to be too sophisticated. Is there a simpler solution to binding DateTime?

From stackoverflow

Extending the Visual Studio editor

I'm writing an add-in for extending Visual Studio 2008 (adding custom functionality). While doing that I came across a situation where it would be good if I could provide custom messages via tooltips for any line of code.

Doing this IMO requires accessing the VS editor control and displaying the respective tooltip in some way. This might also be visualized like breakpoints, which on mouse hover provide a tooltip message.

Is this implementation possible? If yes, what's the way to go? If not, what similar way do you suggest?

From stackoverflow
  • In a recent episode of Hanselminutes, they discussed how much more extensible the VS 2010 editor is going to be. Based on what they said, I think it will be rather difficult (although probably possible) until they introduce that extensibility.

    : Any other similar ideas on doing that?
  • You should take a look at Resharper and work out whether your add-in would be better suited to being a Resharper Plug-In. This approach would require your users to buy Resharper, but this sort of thing is exactly what it does.

    For example, check out http://stylecopforresharper.codeplex.com/ where this sort of hint/tooltip stuff in Resharper is used to highlight lines of code that have StyleCop issues.

  • Yes, it is. You can add more customization to the code editor of VS 2008; you can even create your own language service with your own IntelliSense support in VS 2008.

    You can do this simply by using the VS 2008 SDK, and you can also quickly play with the standalone shell of Visual Studio 2008; MS SQL Server 2008 Management Studio is one example of a customized Visual Studio standalone shell.

    You can visit MS Visual Studio Extensibility:

    http://msdn.microsoft.com/en-us/vsx/default.aspx

    And lots of videos on VS 2008 SDK:

    http://msdn.microsoft.com/en-us/vsx/bb507746.aspx#extensibility

.Net inserting NULL values into SQL Server database from variable values

Hi. There have been similar questions but the answers weren't what I was looking for. I want to insert a value of NULL into the SQL Server database if the reference is NULL or a value has not yet been assigned. At the moment I am testing for null and it looks like

String testString = null;

if (testString == null)
{
    command.Parameters.AddParameter(new SqlParameter("@column", DBNull.Value));
}
else
{
    command.Parameters.AddParameter(new SqlParameter("@column", testString));
}

This looks and feels incredibly clumsy to me. I have quite a few values that I am inserting into a database, and testing them all like the above is very verbose. Does .Net not handle this in some way? I thought maybe using string as opposed to String would help, but that does not appear to work either. Looking around I found articles which talk about using Nullable types.

System.Nullable<T> variable

This seems to work for primitives: int?, char?, double? and bool?. So that might work for those, but what about strings? Am I missing something here? What types should I be using for primitive values and for string values so that I do not have to repeatedly test values before inserting them?

EDIT: Before I get too many answers about ternary operators: I like them, but not in this context. It doesn't make sense for me to need to test that value and have all that extra logic, when that sort of thing could have been implemented lower down in the .Net framework; if I knew what types to use then I would get it for free.

Edit: Okay, so guys, help me formulate my plan of attack. I will use Nullable for my primitives (int?, double?, etc.) and for my strings I will use the ?? test. This keeps things less verbose. Is there anything that I am missing here, like maybe losing some semantics?

From stackoverflow
  • Maybe the ternary operator is something you will find useful.

  • What I sometimes do, is this:

    command.Parameters.Add ("@column", SqlDbType.VarChar).Value = DBNull.Value;
    
    if( String.IsNullOrEmpty (theString) == false )
    {
        command.Parameters["@column"].Value = theString;
    }
    
  • Nullable works great for primitives. I'm going to have to test the behavior of string but one option to at least clean up your code would be to define an extension method of string.

    You can use the ?? operator for strings:

    command.Parameters.AddParameter(new SqlParameter("@column", myNull ?? (object)DBNull.Value));

    ?? returns the first item if it is not null, otherwise it returns the second item.

    Edit

    Fixed the code above so it will compile; you need to cast DBNull.Value to an object.

    LukeH : The compiler will choke on this because it can't implicitly convert from string to DBNull.
    Jakob Christensen : I believe you can use testString ?? (string) DBNull.Value.
    recursive : How about ((object) testString) ?? DBNull.Value
    JoshBerke : Yea this doesn't work just tested this out.
  • I agree that it is clumsy and a bit unfortunate that SqlParameter.Value must be set to DbNull.Value. But there is no way around it so you have to live with testing for null.

  • Nullable<T> cannot be applied to String since it is a reference type and not a value type.

    uriDium : Hi. Just looked it up, it is disappointing that string as opposed to String is still a reference type. Apparently it is an alias but then for instance string a = "hello"; string b = "h"; b += "ello"; then a==b returns true because it compares their values. Interesting.
    Gavin Miller : Yes & No - There's good reason why string is a reference type and not a value type (look at some of Jon Skeet's stuff he's a pro in that area); For such cases as you've specified that's why they created the StringBuilder class.
  • Even better than the ternary is the double-question-mark (??) operator. Takes the first non-null value. So:

    string x = null;
    command.Parameters.AddParameter(
         new SqlParameter("@column", (object)x ?? DBNull.Value));
    

    would give you a parm with a value of DBNull.Value, but

    string x = "A String";
    command.Parameters.AddParameter(
         new SqlParameter("@column", (object)x ?? DBNull.Value));
    

    Would give you a parm with "A String" as the value.

    LukeH : The compiler will choke on this because it can't implicitly convert from string to DBNull.
    WaldenL : You are correct. You need to downcast the string (x) to an object. Edited example.

When to do stored procedures and when not to

This is just a general discussion on what is the best occasion to use stored procedures. Personally I have a very low opinion of stored procs because: 1. they tie you to one particular database environment, and 2. the idea of shuffling back and forth between your interface code and the back-end environment for the stored procs is a nightmare for me; I prefer sticking to one environment when coding.

Having said this, I have colleagues who are madly in love with stored procs even for very small applications, and I need a way to convince them that stored procs are not the best solution to every problem.

Guys what do you think of stored procs?

From stackoverflow
  • I agree with your comments about swapping coding environments and being tied to a specific DB, but there are some really good reasons for using SPs:

    1. Security: in high/properly secured systems pretty much no-one has direct access to the tables, including the admins. All access is via SPs which do have permissions, in that way all access can be monitored and controlled. You just can't do that any other way.

    2. If your code talks to SPs then the underlying DB structure can change without your code having to. Separation and loose coupling are a real boon, especially when upgrading; it's common to write new UI code to talk to SPs prior to upgrading the legacy backend.

    Craig : I am a bit skeptical about point 2. I have never seen a case where the DB structure changed and it didn't affect the application, other than for trivial changes.
    Giovanni Galbo : I wish I could +1 Craig, he makes a very valid point.
    annakata : @Craig: I've actually seen it work in your favour a lot. This can be simple things like the expansion of a table into multiple tables, or a union with new related tables. Abstraction like this is very rarely a bad idea in itself.
    Jon Artus : @Annakata: Absolutely. In theory, you can entirely rewrite the database and the consuming application shouldn't care (if your abstraction doesn't leak, that is). It's the same reason you'd use interfaces in OOP.
    Kezzer : +1 for loose coupling. We use SPs at work all the time to perform advanced operations, or for large queries, or for things that need to take a set of parameters. We've got thousands of SPs.
    Eduardo Molteni : +1 Craig. In all my years of software development never have to confront with a single case when I need to modify the DB but not the UI or Business logic classes.
  • I think they're nice for poking around in the database manually (they are useful for quickly checking things etc), but don't use them for production (for the reasons you mentioned). Use server side business logic instead.

    Mark S. Rasmussen : Why would you use an SP for that? That sounds like ad-hoc querying through management studio / similar.
    tehvan : You don't always have the right admin tools to do certain things.
    Jon Artus : I think the advice "Don't use them [stored procedures] in production." is a very general (and incorrect in this case) statement and possibly needs more explanation...!?
    tehvan : @Jon: updated my post. Can you explain why it would be incorrect?
    Jon Artus : @tehvan: I'd always go SP-based in anything but the most trivial apps. The benefits in terms of security, abstraction and performance far outweigh the costs.
    S.Lott : +1: Code is code and belongs in the application. Data is data and belongs in the database. Stored procedures are code. It's just confusing to have code in two places.
    Guy : @slott: Data logic should be as close to the data as possible. The way you implement a feature in records and fields should be managed by SPs. APPLICATION logic (why you're doing something) can be in higher tiers (the application / GUI) but the GUI should not manipulate the db directly.
    HLGEM : Gotta agree with Guy. Databases are accessed through far more than the UI and the code should be available to all. I can't even imagine trying to manage change to a database when all the code is in the UI. What a nightmare!
    tehvan : @Hlgem: Of course there shouldn't be any db related code in the UI! That's what the DAO layer is for.
  • You have to admit some stuff just makes more sense to do in a stored procedure. It's true that a lot of the stuff that people choose to implement as SPs can really be done in code (that is, in your application's code), but still, if you need to do some administrative work that goes over a whole bunch of tables and repeatedly run some task, an SP usually makes more sense.

    annakata : the point about app code *should be* unrelated. The DB query should be about marshalling data, nothing else and definitely no business logic, but equally the APP should only be concerned about asking for the data, not worrying about exactly how the DB implements that.
  • If you are writing queries against your DB rather than using an ORM, and these queries need to interface with business code, then stored procedures are the best place to do it.

    SP's provide an excellent way of standardising your database access. If all you ever do is very simple one table selects then it might be ok to give them a miss, but if complex logic is needed in the database a stored procedure is a great way of standardising this. Plus you get all the syntax highlighting that a good editor provides, and you don't need to worry about escaping strings and stuff like that.

    I don't really like the argument that it makes migrating to another DBMS a pain. Migrating to another DBMS is always going to be a pain if you are doing anything of complexity in the DB, and who really does this on a regular enough basis to allow it to influence something that they do on a nearly daily basis (writing queries against the DB)?

    Mr. Shiny and New : how do stored procedures help with escaping strings? the call to the stored proc can still have SQL injected
    Jack Ryan : I wasn't talking about injection. I was talking about the fact that placing SQL inside C# strings, for example, can involve a lot of escape characters. With regards to injection in stored procedures, a properly parameterised SP will thwart even the most determined malicious user.
  • If the database and its embedded logic is shared you're DRY when you use stored procedures, because that way all clients of the data use the same rules.

    If there's one, exclusive steward application, as in a service oriented architecture where a single service owns its own data, you can put the logic in either the service or the database.

    Some DBAs prefer stored procedures because they feel that they act like an interface to the underlying schema.

    I think the DB migration argument is overblown. It's seldom done, in my experience.

    Middle tier apps come and go, but data is forever. Those are the crown jewels of any business. I think it's best to do what's necessary to ensure the integrity of the data, and stored procedures can be a good way to do that.

    annakata : All good points, but especially the DB migration. I've never seen or heard of anyone actually doing this in 8 years.
    duffymo : Agreed - I wrote "seldom", but the right word is "never" based on my personal data. Once you buy Oracle or SQL Server, that's where you tend to stay.
    Matthew Watson : Database independence is only really needed for vendor applications, where you have no control over what database it's running on. For anything "internal", you would be crazy not to leverage the advantages of your chosen database for this fantasy of database independence.
  • Oracle's Tom Kyte - "Ask Tom" - has written something like this for Oracle: "if possible, do it in one SQL query, if not do it in a PL/SQL stored procedure, if that's not possible do it in a Java stored procedure, if that's not possible do it in application code". That's rather extreme, but I can see some general pros for stored procedures:

    • Easy access to the database, it is simply easier to do some things closer to the database
    • Performance, not having to send a lot of data back and forth between database and client or application server

    and one general con:

    • Much harder to keep SP code in sync with source code control
    Matthew Watson : I really fail to see how stored procedures are harder to keep in source control than any other source. Keep your stored procedures/packages (everything should be in packages anyway) in a text file, check that into version control... no different to anything else.
    duffymo : @Matthew - exactly right. I agree. Database artifacts ought to be controlled just like middle tier code.
    Mr. Shiny and New : @duffymo: I agree. But often the DBAs are a separate team from the developers and they disagree. sigh.
    Nils Weinander : Matthew - the difference is the lack of direct source code repository integration in the database. With a text file in version control you still have that extra step of updating the procedures in the database. Preferable would be version control integration in the database.
    ObiWanKenobi : "Much harder to keep SP code in sync with source code control". This is a big lie. Harder than what? What's so difficult about checking in a text file with the stored procedure text? If I want to, I can go directly to the web server and change the code of an ASP or PHP page. This is a process problem, not a technology problem.
  • Basically, I'd use SP if

    1. You are coding against a database which has good support for SP's (eg Oracle)
    2. If you are confident that the applications connecting to the database will change more often than the database, eg, "Application independence" is more important than "Database independence"
    HLGEM : excellent point about application independence!
  • A couple more advantages of SPs:

    • You can filter in the DB by using query parameters, rather than in the calling application. This allows you to return much smaller result sets from the DB, saving on both time and bandwidth.
    • Using query parameters rather than just concatenating SQL statements is a great way to guard against injection attacks.
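    The injection-resistance point holds for any parameterized interface, SP or plain query alike: the driver binds values as data rather than splicing them into the statement text. A quick illustration with Python's sqlite3 as a stand-in driver:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Invoices (Spent REAL, Customer TEXT)")
conn.executemany("INSERT INTO Invoices VALUES (?, ?)",
                 [(10.0, "acme"), (5.0, "other")])

# The driver binds the value as data, so quote characters in user
# input cannot terminate the literal and alter the statement.
user_input = "acme' OR '1'='1"
rows = conn.execute(
    "SELECT SUM(Spent) FROM Invoices WHERE Customer = ?", (user_input,)
).fetchall()
print(rows)  # [(None,)] -- no row matches the literal string; the injection is inert
```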
  • In essence, SPs keep SQL code on the SQL server.

    You're writing SQL in its native environment:

    • you'll be using tools designed for working with SQL (query analyzers, optimization, auto-completion, diagrams)
    • DBAs are happy & productive living in this space

    SQL code won't appear in the normal code-base:

    • large opaque blocks of text in strings don't really add value to the coding experience
    • conversely, not having these large opaque blocks of text in source means they don't go into source control, and having them there would be a good thing (it is a pain to manage SQL schema versioning, even with handy tools like Redgate SQL Compare)
  • One of the most important reasons to use SPs is data integrity. It should be up to the database to maintain its own integrity, rather than the client. Imagine the case where you are deleting a customer with an address. This, with a properly normalised database, would involve deletions from at least three tables: customer, address and customer-address (a join table). You should not have to rely on the client to remember to delete from the join table - there is too much risk that you will end up with orphaned rows. You should therefore call an SP that performs the deletion from all three tables in a single transaction. Database integrity applies to all inserts, updates and deletes, therefore I'd recommend using SPs for all DML commands.

    On the whole, I'm happy to query directly from the client, except in cases where you cannot be sure in advance how many queries will be necessary. For instance, if you want to get all of the payments made to an account, but for some reason (due to the data model), you are unable to get all of the data that you need in a single query but instead have to do a query for each payment, then the client is going to be making multiple queries to the database; multiply this by x concurrent users, and you end up in a mess. In these circumstances, it is best to use an SP, therefore requiring only a single interaction with the database.

    EDIT SPs also allow more flexible code reuse. Any application, written in more or less any language, can use the SPs. Furthermore, if this is a client database, then if you write proper interfaces (and documentation) to these SPs within a secure schema, the client can write their own applications to interact with the database.

    EDIT SPs are also, possibly, the most secure way of preventing SQL injection

  • I used to be a stored proc fan, but parameterized queries can work also. Just don't use in-line SQL -- it leaves you open to SQL injection problems.

    The performance differences are trivial.

    Many ORMs like NetTiers and LLBLGen use parameterized queries.

  • Count me firmly on the SP side. It is much simpler to use SPs for access than any other method. You make a change, and you can upload it to prod without having to recompile your UI. Yes, there are times when you have to change both the UI and the proc, but there are many, many times when just the proc needs changing.

    Security is better on procs unless you use dynamic SQL (which should be avoided at all costs as it is impossible to fully test and less secure). This is because users do not have to have direct access to the tables and can only do what is specified in the proc. Not only does this help with outside attacks, but far more importantly, it makes it much harder for the disgruntled or greedy employee to commit fraud or destroy your data. Any financial application or other business-critical application that does not use procs is at risk of serious theft or destruction.

    Performance tuning is another area where procs can shine. It is much easier to performance tune a proc and then upload the change than to find the code in the UI, figure out how to make it better, and then recompile and upload the UI code. Also, if multiple steps are involved, the proc just works better to begin with. DBAs, who often only have access to the database code and not the UI code, are generally far better educated on performance tuning a database than the application developer, so making the code easier for them to work with is also a priority. Some tools like LINQ to SQL which create the code automatically are a nightmare to performance tune (and they do not create highly performant code in general from what I've seen). I wouldn't allow anyone who touches my database to use such a tool, as performance is critical to databases.

    It is also easier for design teams to determine the effect of a potential change to the database when only procs are used. That way they don't have to search the code base for several different UIs and all the backend database stuff like SSIS packages to find what will be affected. This makes it far more likely that a database will be refactored to improve performance or utility.

MVC + Templates

Hi all

I am working on a system that gets templates dynamically; they contain tags like {{SomeUserControl}} {{SomeContent}}

I was wondering how I could use MVC to render those templates and replace the tags in the best possible way, as the templates will be edited via a web front end, and the content / macros will be created from the same web front end.

From stackoverflow
  • You might want to take a look at using another view engine; here are some examples.

    NHaml
    Spark
    NVelocity
    Brail

    I'm sure there are many more but these are the ones I could think of.
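    Before committing to a view engine, the tag substitution itself can be prototyped with a single regular-expression pass over the template (a Python sketch, purely illustrative; names are hypothetical):

```python
import re

TAG = re.compile(r"\{\{(\w+)\}\}")

def render(template, values):
    # Replace each {{Name}} tag with its value; unknown tags pass through.
    return TAG.sub(lambda m: values.get(m.group(1), m.group(0)), template)

print(render("Hello {{SomeContent}} / {{SomeUserControl}}",
             {"SomeContent": "world"}))  # Hello world / {{SomeUserControl}}
```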

viewDidAppear: called twice on modal view controller presented during startup

Resolution: While trying to recreate this bug in a fresh project to submit to Apple, I discovered that it is specific to iPhone OS 2.1, and compiling for 2.2 fixes the problem. Stephen, thanks for your help; I'll be accepting your answer since it would have worked if the bug still existed or I wasn't willing to compile for 2.2.


I have an app which is radically changing its database schema in a way that requires me to transform old-style records to new-style ones in code. Since users may store a lot of data in this app, I'm trying to display a modal view controller with a progress bar while it ports the data over (i.e. as the very first thing the user sees). This view controller's viewDidAppear: begins a database transaction and then starts a background thread to do the actual porting, which occasionally uses performSelectorInMainThread:withObject:waitUntilDone: to tell the foreground thread to update the progress bar.

The problem is, viewDidAppear: is being called twice. I noticed this because that "start a transaction" step fails with a "database busy" message, but setting a breakpoint reveals that it is indeed called two times—once by -[UIViewController viewDidMoveToWindow:shouldAppearOrDisappear:], and again by -[UIViewController modalPresentTransitionDidComplete]. Those names appear to be private UIViewController methods, so I'm guessing this is either a framework bug, or I'm doing something UIKit isn't expecting me to do.

Two relevant code excerpts (some irrelevant code has been summarized):

- (void)applicationDidFinishLaunching:(UIApplication *)application {
    (register some default settings in NSUserDefaults)

    // doing this early because trying to present a modal view controller 
    // before the view controller is visible seems to break it
    [window addSubview:[self.navigationController view]];

    // this is the method that may present the modal view
    [self.databaseController loadDatabaseWithViewController:self.navigationController];

    if(!self.databaseController.willUpgrade) {
     [self restoreNavigationControllerState];
    }
}

And from my DatabaseController class:

- (void)loadDatabaseWithViewController:(UIViewController*)viewController {
    (open the new database)

    (compute the path the old database would live at if it existed)

    if([[NSFileManager defaultManager] fileExistsAtPath:oldDBPath]) {
     (open the old database)

     [viewController presentModalViewController:self animated:NO];
    }
}

So, is there something I'm screwing up here, or should I file a bug report with Apple?

From stackoverflow
  • I saw this in my app too. I never got it entirely confirmed, but I think this is what's happening:

    1. Load root view
    2. Load modal view
    3. OS sends view did appear notification for the view in step 1
    4. The current view controller, which in this instance happens to be your DatabaseController class, picks it up
    5. OS sends the view did appear notification for the modal view
    6. The current view controller gets the notification. In this case it's the exact same controller as last time

    In my case I just reset what happened in the first call to viewDidAppear:.

    In your case two options spring to mind: a static variable to track whether you've started the upgrade already; or look at the UIView* parameter passed in before starting.

    Brent Royal-Gordon : As I noted above, I eventually discovered it was fixed in OS 2.2, but thanks for your suggestion.
    Stephen Darlington : No problem. Great to see the "real" answer!

Get all lucene values that have a certain fieldName

To solve this problem I created a new Lucene index where all possible distinct values of each field are indexed separately.

So it's an index with a few thousand docs that have a single Term.
I want to extract all the values for a certain term. For example, I would like all values that have the fieldName "companyName".
Defining a WildcardQuery is of course not a solution. Neither is enumerating ALL fields and only saving the ones with the correct fieldName.

From stackoverflow
  • This should work (I take it it's still in C#):

    IndexReader.Open(/* path to index */).Terms(new Term("companyName", String.Empty));
    
    borisCallens : ah, empty string :) (added C# tag)

What's the best way to remove all .svn directories throughout a directory hierarchy on Mac OS X?

Is there a command that would be useful?

From stackoverflow
  • Just do an export from the subversion repository.

  • find . -name .svn  -type d -print0 |xargs -0 rm -rf
    
    Jonathan Leffler : Add -type d just to be sure you don't remove files named .svn?
    Paul Tomblin : Ok, fair enough, I've added the -type d.
  • For systems that support it:

    find . -name .svn -delete
    

    or, if they don't support the -delete switch:

    find . -name .svn -exec rm -rf {} \;
    
  • In the root of the working copy folder, do

    svn export --force .
    

Missing StackTrace Information

I seem to be missing some information from my stack trace; here is what I'm getting:

at Foo.Bar(DataTable table) in D:\Foo\Bar\authoring\App_Code\FooBar.vb:line 87

Where is the rest of the stack trace information?

EDIT:

Custom errors in the Web.Config is set to off, and I'm handling the error where it's caught like this:

 Catch ex As Exception
     Response.Write(ex.StackTrace)
 End Try

The Stack Trace is still getting truncated.

From stackoverflow
  • Make sure customErrors is set to "RemoteOnly" or "Off" in your web.config to disable friendly errors.

    Or possibly your stack trace is getting reset? (Although if this were the case you should still see something.)

    This will reset your stack trace:

    catch(Exception ex) 
    {
       throw ex; 
    }
    

    This will NOT reset your stack trace:

    catch(Exception ex) 
    {
       throw; 
    }
    

    EDIT:

    ex.StackTrace gets the current stack. The stack trace starts where the exception was thrown (where the error happens) and ends at the current stack frame where the exception is caught, so it mirrors the call stack in reverse. Since you are writing out the stack trace as soon as the exception is caught, it doesn't get a chance to go any further up the call stack.

    Depending on what you are doing you can try a few things.

    'To just see the stack trace
    Catch ex As Exception
        Throw
    End Try
    

    Environment.StackTrace - Gets current stack trace information

    'If you are trying to log the stack trace
    Catch ex As Exception
        Response.Write(Environment.StackTrace)
        Response.Write(ex.StackTrace)
    End Try
    
    'If the trace is being hidden ("throw ex" vs. just "throw"), try
    Catch ex As Exception
        'Careful: InnerException can be null,
        'so check it before using it
        Response.Write(ex.InnerException)
    End Try
    

    One of those methods should work for you.

Passing an Anonymous Type to UpdateModel/TryUpdateModel in ASPNETMVC

Given the following controller method:

    [AcceptVerbs("POST","GET")]
    public ActionResult apiMapInfo()
    {
        var x = new { Lat = "", Long = "", Name = ""};
        var mapInfo = new DALServices.Models.MapInfo();

        // Updates correctly
        TryUpdateModel(mapInfo);

        // Does not update correctly
        TryUpdateModel(x);

        var svc = new APIServices.Services.ReturnMapInfo() {inputs = mapInfo};
        svc.Run();
        return new ObjectResult<Result>(new Result(svc.errorCode, svc.errorMessage, svc.results), svc.ExtraTypesForSerialization);
    }

The object x is not updated correctly by the TryUpdateModel method, but the mapInfo object is.

My assumption is that the TryUpdateModel method doesn't handle mapping to an anonymous type like x.

Thanks,

Hal

From stackoverflow
  • I'm guessing it's because UpdateModel and TryUpdateModel look for settable properties, and reflection on anonymous types might be a bit different. Either way, the easiest thing to do would be to create a concrete type.

  • Anonymous types are immutable. Hence, they cannot be updated.

  • Anonymous types have read-only properties, so there is no public setter available for TryUpdateModel to change the property value.
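As these answers suggest, the fix is to bind to a concrete type with settable properties instead of the anonymous type. A minimal sketch (the `MapInfoInput` name is invented for illustration):

```csharp
// A concrete type with public setters, so the model binder
// can assign posted values to it via reflection.
public class MapInfoInput
{
    public string Lat { get; set; }
    public string Long { get; set; }
    public string Name { get; set; }
}

// In the controller action:
var input = new MapInfoInput();
TryUpdateModel(input); // succeeds now: the properties are writable
```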

ASP.Net session and cookies for keeping someone logged in...

I've got an existing site I'm taking over, and right now it stores, in a session variable, the id of the currently logged in user (if logged in at all -- otherwise I'm sure it is empty string or null or something).

The client now wants, after someone is logged in, to "keep" them logged in on that computer for an indefinite amount of time.

ASP.NET sessions have a maximum idle time of 1 day, I believe. The Flash portion of the website (the whole front end is Flash) isn't written all that well: it processes a login and then, as long as the Flash isn't reloaded, assumes that the user is still "logged in".

I think my solution is to ALSO store a client side cookie with some GUID value and hold in the database the associated user id...sort of like a session that never expires. So, when the page is loaded, I can check my cookie, use that to select the userid out of the database, and if we find one, then set the session value that says user 23 is logged in.

Does anyone see any issues with this perspective? Would you recommend something different? I really don't want to refactor a bunch of the existing code, but just slip this in on top...

PS -- security is not really a concern. The only reason they have people log in is so we can track orders by a person, but no money changes hands through this website. There is also no personal information that a user can view or edit, either.

From stackoverflow
  • This is how I do it. I actually have a cookie that holds their login and password, so that I can automatically log them in should they not be logged in. I expire the cookie after a couple of days of inactivity. The downside is that everyone forgets their password, because the only time they really have to enter it is when they come back from extended time off.

    This is for an internal application, with the same customer demands that you have and this works ... and makes the customer happy.

    One thing we may end up doing is just using Windows authentication; it might actually work better in this circumstance.

    Matt Dawdy : Our users aren't on the same network, so Network Auth isn't an option here, but thanks for the advice. I'm going to store a guid in a cookie (instead of the username/password like you do) and look it up. Seems like we are on the same page. Thanks!
    mattruma : Excellent! For whatever it is worth I do encrypt the cookie values.
    Matt Dawdy : LOL! Yeah, even though it wouldn't matter, it just KILLS me to store usernames and passwords in plain text. I WILL do what you recommend then -- store username/password all encrypted client side in a cookie. No DB changes that way. Thanks again.
    WaldenL : Encrypted or not, storing the username and/or password client-side in a cookie is just bad form, IMHO.
  • That's the way I do it, but the problem with it (at least I think it's a problem) is that when you store the username and password in a cookie, no encryption is applied when you add the cookie. If you look at the cookies in your browser, the username and password are displayed there plain as day. Is it possible to get some kind of encryption on the cookies you store? Or how would you handle this?

    Matt Dawdy : As I read the thread, I realize it's not that clear. What I meant was that, server side, I encrypt the username/password and then send that encrypted value as the cookie. On a subsequent request, I receive the encrypted value and decrypt it, server side. Nothing in clear text on client side.
  • Check out this blog posting: http://timmaxey.net/archive/2009/03/06/asp.net-cookie-auto-log-in.aspx. Basically you need to save the cookie with a GUID, a series, and a token. The token, in my case, changes all the time; the series is generated based on something like the GUID and id combo; and the GUID is always stored with the user. There is a cookie table to store this info, etc. Pretty secure - not 100%, but pretty good... Tim Maxey

  • I recommend using the Enterprise Library Crypto App Block to store an encrypted cookie which is nothing more than a GUID. Get the GUID, and use a session table in the database to track user info.

    At the session start event, get the user info and cache it.

    Using the session object for user info is not recommended, because it won't work on a web farm unless you use a database for session state.

  • You're basically rolling your own session state at that point, and I'm fine with that. However, I would not go the route of storing the username/password in a cookie (even if encrypted): there's no way to expire it from the server side. You can always remove your row in the table to force a user to log in again, but if they hold the username/password, they hold the keys to the kingdom.
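A rough ASP.NET sketch of the GUID-cookie approach discussed above (the cookie name and the two data-access helpers, `SavePersistentLogin` and `FindUserIdByToken`, are hypothetical):

```csharp
// On successful login: issue a persistent cookie holding only a GUID,
// and store the GUID -> userId mapping server-side.
Guid token = Guid.NewGuid();
SavePersistentLogin(token, userId);  // hypothetical DB helper

HttpCookie cookie = new HttpCookie("login_token", token.ToString());
cookie.Expires = DateTime.Now.AddYears(1);
Response.Cookies.Add(cookie);

// On a later request with no session: look the GUID up and restore the login.
HttpCookie incoming = Request.Cookies["login_token"];
if (Session["userId"] == null && incoming != null)
{
    int? foundUserId = FindUserIdByToken(new Guid(incoming.Value));  // hypothetical DB helper
    if (foundUserId.HasValue)
        Session["userId"] = foundUserId.Value;
}
```

Because only a meaningless GUID lives on the client, the server can revoke a login at any time by deleting the matching row.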

Environment variable to force .NET applications to run as 32bit

I've been told there is an environment variable you can set to force .NET applications to run as 32bit applications on x64 versions of Windows. Do you know what it is, or know of a reference on it? I'm afraid my google-fu has failed me today (or it doesn't exist).

I'm using Resolver One (a .NET spreadsheet) and want to access some 32bit only functionality without modifying the executable. If I can configure this from an environment variable then I can access 32bit functionality when needed but also run as a 64bit app when needed.

(NOTE: effectively I want to be able to switch whether an application runs as 32bit or 64bit at launch time instead of forcing it at compile time.)

Thanks

From stackoverflow
  • Check this: http://www.hanselman.com/blog/BackToBasics32bitAnd64bitConfusionAroundX86AndX64AndTheNETFrameworkAndCLR.aspx

    BTW, the target platform is set in the project properties dialog.

    fuzzyman : No - that talks about setting 32bitness at compile time not at runtime. I have a .NET application that I *sometimes* want to run as a 32bit app.
  • How about this link

    Not quite an environment variable, but you can just use the CorFlags tool to switch back and forth.

    To switch to 32 bit:

    CorFlags.exe TheApp.exe /32BIT+
    

    To go back to 64 bit:

    CorFlags.exe TheApp.exe /32BIT-
    
    fuzzyman : Hmmm... this is what I was trying to avoid, but may not have any choice - thanks.
    Eric Petroelje : @Ruben - fixed my answer.
    Ruben Bartelink : @Eric: Great stuff; Removed my comment
  • I've had an answer from Dino Veihland (Microsoft IronPython developer). I haven't had time to test it yet...

    It's COMPLUS_ENABLE_64BIT. I think setting it to 0 disables 64-bit.

    You should be able to set it as an env var, or add a value named "Enable_64Bit" under HKLM\Software\Microsoft\.NETFramework to set it globally (this is how all the COMPlus_* vars work). This one might be special enough (it has to be read before the process is created) that it has to be set in the reg key, but I'm not entirely certain.

    fuzzyman : The environment variable didn't work for me, but setting the registry entry and rebooting did.
    UserControl : the registry setting worked for me too, thanks!
    Patrick Cuff : Registry setting worked like a charm, thanks :)

How can we use MSHTML with VBA?

I saw a lot of examples in MSDN on how to use MSHTML in VS. Does anyone know if and how we can use MSHTML from VBA to open web pages?

Thanks.

From stackoverflow
  • In the VBA editor, go to Tools -> References and add a reference to the Microsoft HTML Object Library [MSHTML.TLB]. Here is a link with an example in VBA.
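With that reference set, a minimal sketch of loading a page might look like this (`createDocumentFromUrl` fetches asynchronously, so the loop waits for `readyState`; the URL is just an example):

```vba
Dim loader As MSHTML.HTMLDocument
Dim page As MSHTML.HTMLDocument

Set loader = New MSHTML.HTMLDocument
' IHTMLDocument4.createDocumentFromUrl fetches the page asynchronously
Set page = loader.createDocumentFromUrl("http://www.example.com", vbNullString)

' Wait until the document has finished loading
Do While page.readyState <> "complete"
    DoEvents
Loop

MsgBox page.Title
```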

Adobe Reader Command Line Reference

Is there any official command line (switches) reference for the different versions of
Adobe (formerly Acrobat) Reader?

I didn't find anything on www.adobe.com/devnet/, ...

In particular, I want to:

  • Start Reader and open a file
  • Open a file at a specific position (page)
  • Close Reader (or single file)
From stackoverflow
  • You can find something about this in the Adobe Developer FAQ. (It's a PDF document rather than a web page, which I guess is unsurprising in this particular case.)

    The FAQ notes that the use of the command line switches is unsupported.

    To open a file it's:

    AcroRd32.exe <filename>
    

    The following switches are available:

    • /n - Launch a new instance of Reader even if one is already open
    • /s - Don't show the splash screen
    • /o - Don't show the open file dialog
    • /h - Open as a minimized window
    • /p <filename> - Open and go straight to the print dialog
    • /t <filename> <printername> <drivername> <portname> - Print the file to the specified printer.
  • I found this:

    http://www.robvanderwoude.com/commandlineswitches.php#Acrobat

    Open a PDF file with navigation pane active, zoom out to 50%, and search for and highlight the word "batch":

    AcroRd32.exe /A "zoom=50&navpanes=1&search=batch" PdfFile
    
  • Also found this pdf reference:

    http://www.adobe.com/devnet/acrobat/pdfs/pdf%5Fopen%5Fparameters.pdf

TFS Client API - Query to get work items linked to a specific file?

We are writing a custom tool, using the TFS client APIs, to connect to TFS and fetch work items for a project, etc.


We are querying the work item store, using WIQL.

Given the server path of a file, what is the easiest query to get the work items linked to this file from TFS?

From stackoverflow
  • Right-click the file in Solution Explorer and select View History. You will get a list of changesets. Double-clicking a changeset will bring up a dialog where you can see related work items.

    amazedsaint : Sorry, I'm talking about using WIQL to query the TFS from a custom tool we are developing.
  • I'm not sure that there is an easy way to do the query that you are requesting using the TFS API. I know that you definitely cannot do it using WIQL. I think, using the API, you would have to iterate over all the work items, get the changeset links in them, and then look in each changeset for the file path that you are after. This is obviously not much use.

    You could get that data using the TFS Data Warehouse database, however. This information will lag behind the live operational store because the warehouse is only updated periodically, but it will allow you to track things by the folder/file dimension pretty easily.

Where should I create ListViewItem list when using the MVP pattern?

Hi all,

I have a small application that I have written that uses the MVP pattern as follows:

  • I created an interface called IView
  • I implemented this interface in the Form
  • Passed in an instance of the form as type IView into the constructor of the presenter

The form contains a ListView component. The items that populate the ListView are created in the presenter. I heard that it is not a good idea to use UI component classes in the presenter. How and where should I create these ListViewItems? I could create the ListViewItems in the form itself but doesn't the form need to be as lightweight as possible with no logic in it?

Edit: N.B. This is a Windows Form application

From stackoverflow
  • The ListViewItems are view-specific, so you should create them in the view. If you create them in the presenter, all views must depend on ListViewItems, which is not good.

  • Create data items in the presenter. Assign these to the view and have the view use data binding to display the data items:

    //in presenter
    var dataItems = _someService.GetData();
    _view.Data = dataItems;
    
    //in view code-behind
    public ICollection<DataItem> Data
    {
        get; set; //omitted for brevity - will require change notification
    }
    
    //in view XAML
    <ListView ItemsSource="{Binding Data}">
      <ListView.View>
        <GridView>
          <GridViewColumn DisplayMemberBinding="{Binding Path=Name}"/> 
          <GridViewColumn DisplayMemberBinding="{Binding Path=Age}"/> 
        </GridView>
      </ListView.View>
    </ListView>
    

    HTH, Kent

    Draco : Hi Kent, this is a Windows Form application, sorry about that
    Kent Boogaart : lol - no problem. Same theory applies though. Presenter is agnostic of the UI and passes the data to the view. View can use binding or whatever to display the data.
  • I recently had the same conundrum, but for a tree view.

    To solve it nicely, you have to use delegates to handle the creation/conversion of data to visual elements.

    Example:

    class View
    {
      TreeNode Builder(object foo, object bar) { ... }
    }
    
    class Presenter
    {
      void InitView(View v)
      {
        Model.Build(v.Builder);
      }
    }
    

    Ok, that is very rough, but it allows you to quite easily build recursive structures like trees. :)

    NOTE: the model and view do not actually care about each other's types.

  • I could create the ListViewItems in the form itself but doesn't the form need to be as lightweight as possible with no logic in it?

    A simple loop and simple object creation are not difficult. Such code is fairly lightweight for a View:

    class SomeView 
    {
      void SetData(IEnumerable<DataItem> dataItems) 
      {
        foreach(DataItem dataItem in dataItems) 
        {
          ListViewItem lvi = new ListViewItem();
          lvi.Text = dataItem.Text;
          ...
        }
      }
    }
    

    Also, you can use Binding (as others suggested). This will simplify SetData even more.

    Try to keep the View code so simple that you can "validate" it by a quick code review :-)

  • WinForms as a technology is not designed for MVP, so the idea of separating concerns should be applied with reason. I would expect the control to live in the form; the presenter should be free of view-specific things. Any control itself breaks MVP, because it contains both data and representation. As your application grows, it will be harder and harder to keep it MVP-style. There's not much benefit in implementing MVP here; with WinForms, the traditional control-style (component-style) approach usually works well.

Using a sub-domain of website to login users

Fogbugz-on-demand, 37 signals, and PB-Wiki all use sub-domains with custom URLs for each group of users to access their login page. So it doesn't even seem possible for a user of those services to log in through a generic login page (one that any user of the service could log into).

At first I thought this was a terrible design flaw, but now I think it is a really good idea because:

1) Additional security. It may not be much, but you now need three pieces of info: the sub-domain URL, the login, and the password.

2) You can let the same login log into different accounts by the subdomain url.

3) This lends itself to more isolation, load balancing, and easy deployment, since you can have the subdomains running on different servers, more isolated from each other.

4) You can gradually roll out changes to a certain set of sub-domains that are marked as BETA users before deploying them to all users.

Are sub-domain based logins for web applications the future? Are there pitfalls with this design approach I'm not seeing? Is it too confusing for non-programmer users to understand the sub-domain concept?

From stackoverflow
  • We went with a triple-element login instead. We ask for the username, password, and client-id. Same concept as the sub domains but there's a common login URL. That works fine too. However, I do see an advantage in the sub-domain process, you can brand the login page because you know the requested client. In our approach we can't brand the login page since we don't know the client until you submit the login page.

How can I increase the value in a HashTable?

I have a Hashtable in which I keep track of colors (the keys) and counts of those colors (the values).

I am trying to figure out how to increment the value when the Hashtable already contains the color. Here is a code snippet:

Hashtable htColors = new Hashtable();

if (htColors.Contains(color))
{
    // Want to increase the "value" of the key here.       
}
else
{
    htColors.Add(color, 1); //Found color for first time
}
From stackoverflow
  • Try the following

    if (htColors.Contains(color))
    {
       int old = (int)htColors[color];
       htColors[color] = old + 1;
    }
    

    EDIT: Response to comments

    IMHO, the Dictionary approach is much better because it is 1) type safe and 2) eliminates the boxing involved in this solution.

    Having the line be the following won't affect the key, just the value

    htColors[color] = (int)htColors[color] + 1;
    
    Xaisoft : Ok, your solution appears to work. What do you think about the Dictionary alternative? Also, what if I just did (int)htColors[color] + 1, would that change the key or value?
    Xaisoft : Thanks for the help with the HashTable. It looks like I'll be going with a Dictionary after all.
    Michael Meadows : Someone can correct me if I'm wrong here, but I believe that auto boxing/unboxing occurs for any value type that is moved to the heap, even (in this case) ints in a typed dictionary (Dictionary). What Dictionary does elimintate is clutter from casting.
    JaredPar : @Michael, you're confusing lifetime with boxing. Boxing only occurs when a ValueType is converted to an object. Moving a ValueType into the heap in a strongly typed container incurs no boxing but does alter the variable lifetime. Eventually though the ValueType must be rooted by some reference.
  • I'm posting this to be pedantic. I don't like this style of interfacing with Dictionary, because there is a cost to this very common kind of access: if your most common case is touching an element that already exists, you have to hash and look up your value three times. Don't believe me? I wrote DK's solution here:

    static void AddInc(Dictionary<string, int> dict, string s)
    {
        if (dict.ContainsKey(s))
        {
            dict[s]++;
        }
        else
        {
            dict.Add(s, 1);
        }
    }
    

    When put into IL - you get this:

    L_0000: nop 
    L_0001: ldarg.0 
    L_0002: ldarg.1 
    L_0003: callvirt instance bool [mscorlib]System.Collections.Generic.Dictionary`2<string, int32>::ContainsKey(!0)
    L_0008: ldc.i4.0 
    L_0009: ceq 
    L_000b: stloc.0 
    L_000c: ldloc.0 
    L_000d: brtrue.s L_0028
    L_000f: nop 
    L_0010: ldarg.0 
    L_0011: dup 
    L_0012: stloc.1 
    L_0013: ldarg.1 
    L_0014: dup 
    L_0015: stloc.2 
    L_0016: ldloc.1 
    L_0017: ldloc.2 
    L_0018: callvirt instance !1 [mscorlib]System.Collections.Generic.Dictionary`2<string, int32>::get_Item(!0)
    L_001d: ldc.i4.1 
    L_001e: add 
    L_001f: callvirt instance void [mscorlib]System.Collections.Generic.Dictionary`2<string, int32>::set_Item(!0, !1)
    L_0024: nop 
    L_0025: nop 
    L_0026: br.s L_0033
    L_0028: nop 
    L_0029: ldarg.0 
    L_002a: ldarg.1 
    L_002b: ldc.i4.1 
    L_002c: callvirt instance void [mscorlib]System.Collections.Generic.Dictionary`2<string, int32>::Add(!0, !1)
    L_0031: nop 
    L_0032: nop 
    L_0033: ret
    

    which calls to ContainsKey, get_item, and set_item, all of which hash and look up.

    I wrote something less pretty which uses a class that holds an int and the class lets you side-effect it (you can't really use a struct without incurring the same penalty because of struct copying semantics).

    class IntegerHolder {
        public IntegerHolder(int x) { i = x; }
        public int i;
    }
    static void AddInc2(Dictionary<string, IntegerHolder> dict, string s)
    {
        IntegerHolder holder = dict[s];
        if (holder != null)
        {
            holder.i++;
        }
        else
        {
            dict.Add(s, new IntegerHolder(1));
        }
    }
    

    This gives you the following IL:

    L_0000: nop 
    L_0001: ldarg.0 
    L_0002: ldarg.1 
    L_0003: callvirt instance !1 [mscorlib]System.Collections.Generic.Dictionary`2<string, class AddableDictionary.IntegerHolder>::get_Item(!0)
    L_0008: stloc.0 
    L_0009: ldloc.0 
    L_000a: ldnull 
    L_000b: ceq 
    L_000d: stloc.1 
    L_000e: ldloc.1 
    L_000f: brtrue.s L_0023
    L_0011: nop 
    L_0012: ldloc.0 
    L_0013: dup 
    L_0014: ldfld int32 AddableDictionary.IntegerHolder::i
    L_0019: ldc.i4.1 
    L_001a: add 
    L_001b: stfld int32 AddableDictionary.IntegerHolder::i
    L_0020: nop 
    L_0021: br.s L_0033
    L_0023: nop 
    L_0024: ldarg.0 
    L_0025: ldarg.1 
    L_0026: ldc.i4.1 
    L_0027: newobj instance void AddableDictionary.IntegerHolder::.ctor(int32)
    L_002c: callvirt instance void [mscorlib]System.Collections.Generic.Dictionary`2<string, class AddableDictionary.IntegerHolder>::Add(!0, !1)
    L_0031: nop 
    L_0032: nop 
    L_0033: ret
    

    Which calls get_item once - there is no additional hashing in the case of an object present. I got a little sleazy and made the field public to avoid the method calls for property access.

    If it were me, I would wrap this overall functionality into its own class and hide the IntegerHolder class from public view - here's a limited version:

    public class CountableItem<T>
    {
        private class IntegerHolder
        {
            public int i;
            public IntegerHolder() { i = 1; }
        }
        Dictionary<T, IntegerHolder> dict = new Dictionary<T, IntegerHolder>();
    
    public void Add(T key)
    {
        // TryGetValue avoids a KeyNotFoundException when the key is absent
        IntegerHolder val;
        if (dict.TryGetValue(key, out val))
            val.i++;
        else
            dict.Add(key, new IntegerHolder());
    }
    
        public void Clear()
        {
            dict.Clear();
        }
    
    public int Count(T key)
    {
        IntegerHolder val;
        if (dict.TryGetValue(key, out val))
            return val.i;
        return 0;
    }
    
        // TODO - write the IEnumerable accessor.
    }
    
    DK : +1 for a very good IL insight. This demonstrates why/when using a value type for dictionary bucket might be a bad idea.
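For completeness: `Dictionary<TKey,TValue>.TryGetValue` offers a middle ground between the two approaches above - a single lookup to read, one more to write, and no wrapper class. A sketch:

```csharp
static void AddInc3(Dictionary<string, int> dict, string s)
{
    int count;
    dict.TryGetValue(s, out count); // leaves count at 0 if the key is absent
    dict[s] = count + 1;            // one read lookup + one write lookup
}
```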