Friday, May 6, 2011

Break the PDF document after 100 pages

I am working with JasperReports and the iReport tool. One of the client's requirements is that the generated PDF document be limited to 100 pages.

Could you please help me? How can I generate the 100-page PDF document?

From stackoverflow
  • In iReport you can use the built-in variable PAGE_NUMBER (note that the built-in PAGE_COUNT variable counts the records on the current page, not the page number). For every element in the detail band, put the following in the "Print When Expression" textbox:

    Boolean.valueOf($V{PAGE_NUMBER}.intValue() <= 100)
    

    This will stop printing after page number 100.

IE WebControls TreeView - web application javascript problem

I am using the old Microsoft.Web.UI.WebControls TreeView control. When running under a web application project, the treeview.GetChildren() method throws an 'Object or property not supported' error, yet the same code in an ASP.NET 2.0 website project runs fine.

Has anyone encountered this issue? There is almost nothing on the web about this control...

thanks for any help.

From stackoverflow
  • Because the web controls completely suck dude. Use javascript! Not that horrible non-standard-compliant stuff...

  • santiycr, you really should take some time to learn about web technologies before writing stupid comments.

Reflection performance for Data Access Layer

Hi all,
I created a framework for a past project; one of its functions was to load database info into my business entity classes (only properties, no methods) and, in the other direction, to load the parameters collection of the stored procedure to be executed from the business entity classes. To do this, I decorated the business entity classes with the DB field info and SP parameters, like the sample below, and let the framework load the entity or the parameters collection using reflection, so I didn't have to generate new code for maintenance.
But now I am creating a new and much bigger project, with much more code to maintain, but where performance is critical, and I was wondering: is it worth using reflection for all the loading, keeping the code much simpler, or should I actually generate all the code and maintain all the changes?
I did some searching and read some of the documentation on MSDN, but I still found lots of different opinions: people who like reflection show numbers suggesting the overhead is not that bad, while others say it's actually better to stay away from reflection.

Technical specs for the new app:
Language: C#
.Net Version: 3.5
Application Type: Classic Web Forms, accessing logic components and a data access tier also written in C#
Database: SQL Server 2008
Database Abstraction Layer: All access to the DB is made via Stored Procedures and User Defined Functions.


Sample Code:

    // Decorated class
    [System.Serializable()]
    public class bMyBusinessEntity {
        private Int64 _MyEntityID;
        private string _MyEntityName;
        private string _MyEntityDescription;

        [aFieldDataSource(DataColumn = "MyEntityID")]
        [aRequiredField(ErrorMessage = "The field My Entity ID is mandatory!")]
        [aFieldSPParameter(ParameterName = "MyEntityID")]
        public Int64 MyEntityID {
            get { return _MyEntityID; }
            set { _MyEntityID = value; }
        }

        [aFieldDataSource(DataColumn = "MyEntityName")]
        [aFieldSPParameter(ParameterName = "MyEntityName")]
        public string MyEntityName {
            get { return _MyEntityName; }
            set { _MyEntityName = value; }
        }

        [aFieldDataSource(DataColumn = "MyEntityDescription")]
        [aFieldSPParameter(ParameterName = "MyEntityDescription")]
        public string MyEntityDescription {
            get { return _MyEntityDescription; }
            set { _MyEntityDescription = value; }
        }
    }


    // To load from the DB into the object:
    using (DataTable dtblMyEntities = objDataSource.ExecuteProcedure(strSPName, objParams)) {
        if (dtblMyEntities.Rows.Count > 0) {
            DataRow drw = dtblMyEntities.Rows[0];
            oFieldDataSource.LoadInfo(ref objMyEntity, drw);
            return objMyEntity;
        }
        else
            throw new Exception("Row not found!");
    }

    // To load from the object into the DB:
    oDataSource objDataSource = new oDataSource();
    IDbDataParameter[] objParams = objDataSource.GetProcedureParameters(strSPName);
    oFieldSPParameter.LoadInfo(objParams, objMyEntity);
    objDataSource.ExecuteNonQuery(strSPName, objParams);
From stackoverflow
  • Rather than rolling what is basically your own ORM, I would recommend switching to one of the established ORMs such as NHibernate or Entity Framework.

    To answer your question directly, reflection performance isn't that bad, but I'd personally never think of using an ORM I rolled myself on a large project.

  • Personally, I wouldn't use Reflection if the data access requirements call for a large number of transactions (a highly transactional system) - what you gain in flexibility ultimately costs you at runtime (more comment on reflection here).

    I'd pick a popular ORM solution in deference to a custom solution. Mainly you will benefit from a larger community of people using the same approaches (easier to get design advice, debug and also take advantage of known performance tweaks).

    It also usually means access to updates which support newer technology (e.g. SQL Server 2008) as it is released - you don't bear that burden, or the cost of testing (other than straight implementation).

    There are a number of popular solutions including the Entity Framework and LINQ to SQL (in .Net 3.5 and both support Stored Procs) but also a great deal of support for a template-driven approach using CodeSmith templates/Net Tiers, or more complicated solutions using NHibernate or Deklarit, for example.

    The last big solution I was involved with used Stored Procedures and Functions in much the same way you have described, however we used the Enterprise Library and generated DAL access classes and data transfer objects using a handwritten tool. You could use much the same approach as used in MS Patterns and Practices 'Web Service Software Factory' potentially, or any template-driven approach.
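
    A common middle ground, not spelled out in the answers above but worth noting: keep the attribute-driven mapper and pay the reflection cost only once per type by caching the metadata. Below is only a minimal sketch built on the question's own aFieldDataSource attribute; the EntityMapper class and its shape are hypothetical, not the framework's actual code:

    using System;
    using System.Collections.Generic;
    using System.Data;
    using System.Reflection;

    // Caches the DataColumn-to-property mapping per entity type, so the
    // expensive reflection scan runs once per type rather than once per row.
    public static class EntityMapper {
        private static readonly Dictionary<Type, Dictionary<string, PropertyInfo>> _cache =
            new Dictionary<Type, Dictionary<string, PropertyInfo>>();

        public static void Load<T>(T entity, DataRow row) {
            Dictionary<string, PropertyInfo> map;
            lock (_cache) {
                if (!_cache.TryGetValue(typeof(T), out map)) {
                    map = new Dictionary<string, PropertyInfo>();
                    foreach (PropertyInfo prop in typeof(T).GetProperties()) {
                        // aFieldDataSource is the custom attribute from the sample above.
                        object[] attrs = prop.GetCustomAttributes(typeof(aFieldDataSource), true);
                        if (attrs.Length > 0)
                            map[((aFieldDataSource)attrs[0]).DataColumn] = prop;
                    }
                    _cache[typeof(T)] = map;
                }
            }
            // The per-row cost is now just dictionary lookups plus SetValue calls.
            foreach (KeyValuePair<string, PropertyInfo> entry in map)
                if (row[entry.Key] != DBNull.Value)
                    entry.Value.SetValue(entity, row[entry.Key], null);
        }
    }

    Compiled expression trees (new in .NET 3.5) can remove even the SetValue overhead, but in most data access code the database round trip dwarfs either cost.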

Thursday, May 5, 2011

DRY vs. "prefer containment over inheritance"

There is a general rule of OO design that you should model is-a relationships using inheritance and has-a relationships using containment/aggregation and forwarding/delegation. This is further narrowed by the admonishment from the GoF that you should generally favor containment over inheritance, suggesting, perhaps, that if you could make a strong case for either one in a particular situation, that containment should generally get the nod due to the maintenance problems inheritance can sometimes cause.

I understand the reasoning behind this thinking, and I don't necessarily disagree with it. However, when I see a class with scores of methods, each just forwarding to some instance variable, I see a form of code duplication. Code duplication is, in my opinion, the ultimate code smell. Reimplementing an enormous protocol of methods just because the relationship between two classes isn't strictly is-a seems like overkill. It is additional, unnecessary code added to the system, code that now needs to be tested and documented like any other part of the system--code that you likely wouldn't have had to write if you had just inherited.

Do the costs of adhering to this containment-over-inheritance principle ever outweigh its benefits?

From stackoverflow
  • Yeah, what you're seeing there is a horrific collision of design paradigms from different corners of the universe: the GoF's preference for aggregation/composition colliding with the "Law of Demeter".

    I am on record as believing that, in context of aggregation and composition use, the Law of Demeter is an anti-pattern.

    Contrary to it, I believe that constructs like person->brain->performThought() are absolutely right and appropriate.

  • The cost of virtually anything can outweigh its benefits. Having hard and fast, NO EXCEPTIONS rules will always get you into trouble with development. In general, it's a bad idea to use (if your language/runtime support it) reflection in order to gain access to variables that aren't intended to be visible to your code. Is this a bad practice? As with everything,

    it depends.
    

    Can composition sometimes be more flexible or easier to maintain than straight-up inheritance? Sure. That's why it exists. In the same way, inheritance can be more flexible or easier to maintain than a pure composition architecture. Then you have interfaces or (depending on the language) multiple inheritance.

    None of these concepts are bad, and people should be conscious of the fact that sometimes our own lack of understanding or resistance to change can cause us to create arbitrary rules that define something as "bad" without any real reason to do so.

    toby : Couldn't agree more. While it seems to be the natural tendency of engineers to want to find the "one true way", the reality is that everything depends on its context; there is just not a right answer for everything. And *that* is the right answer for everything.
  • I agree with your analysis, and I would favour inheritance in those cases. It seems to me that this is a bit like blindly implementing trivial accessors in a naive effort to provide encapsulation. I think the lesson here is that there simply aren't any universal rules that always apply.

  • This might not answer your question, but there's something that has always bothered me about Java and its Stack. Stack, as far as I know, should be a very simple (probably the simplest) container data structure, with three basic public operations: pop, push and peek. Why would you want insertAt, removeAt, and that kind of functionality in a Stack? (In Java, Stack inherits from Vector.)

    One may say, well, at least you don't have to document those methods, but why have methods that aren't supposed to be there in the first place?

    Emil H : Yes, this is really off-topic. But I do agree that the bloated interface of java.util.Stack is annoying.
    Min : This is a pretty interesting observation on some of the pitfalls of inheritance though. If your answer was rephrased, this would be pretty on topic. The Java stack could have contained a Vector and then only has pop, push, and peek. On another note, I think having more flexible data structures in the BCL is a good thing.
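
    Min's rephrasing is easy to make concrete: a stack that contains its backing list instead of inheriting from it exposes only push, pop and peek. A minimal sketch (written here in C# rather than Java; SimpleStack is a hypothetical name, and the point is identical in either language):

    using System;
    using System.Collections.Generic;

    // A stack built by containment: the backing List<T> stays private,
    // so callers never see insert-at/remove-at operations.
    public class SimpleStack<T> {
        private readonly List<T> _items = new List<T>();

        public void Push(T item) { _items.Add(item); }

        public T Pop() {
            T top = Peek();
            _items.RemoveAt(_items.Count - 1);
            return top;
        }

        public T Peek() {
            if (_items.Count == 0)
                throw new InvalidOperationException("Stack is empty");
            return _items[_items.Count - 1];
        }
    }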
  • The GoF is an old text by now. It depends on what OO environment you look at (and even then you can come across the same problem in OO-unfriendly settings such as C-with-classes).

    For many runtime environments you pretty much have no choice: single inheritance. And any workaround attempted to circumvent its limitations is just that, no matter how sophisticated or 'cool' it might seem.

    Again, you will see this manifested everywhere, including C++ (the most capable of the bunch) where it interfaces with C callbacks (which are widespread enough to make anyone pay attention). C++, however, offers you mix-ins and policy-based designs built on templates, so it can occasionally help with hard engineering problems.

    While containment can give you the benefits of indirection, inheritance can give you easily accessible composition. Pick your poison: aggregation ports better but always looks like a violation of DRY, while inheritance can lead to easier reuse, with a different set of potential maintenance headaches.

    The real problem is that the computer language or modelling tool should give you the option of doing either, independent of that choice, and so make it less error-prone; but not many people model before letting computers write programs for them, no good tools are around (Osla certainly isn't one), and many environments instead push something as silly as reflection, IoC containers and whatnot, which is very popular, and that in itself tells a lot.

    There was once a piece done for an ancient COM technology, called the Universal Delegator, by one of the best Doom players around, but it isn't the kind of development anyone would adopt these days. It required interfaces, sure (and that isn't a hard requirement in the general case). The idea is simple and dates all the way back to interrupt handling. Only similar approaches give you the best of both worlds, and they are somewhat evident in scripted environments such as JavaScript and in functional programming (although far less readable or performant).

  • You've got to pick the right solution to the problem at hand. Sometimes inheritance IS better than containment, sometimes not. You have to use your judgment, and when you can't figure out which way to go, write a little code and see how bad it gets. Sometimes writing some code can help you make a decision that's not obvious otherwise.

    As always: the right answer depends on so many factors that you can't make hard-and-fast rules.

    chaos : Some people do. I can kinda see it. It's very relaxing compared to actual work.
    Michael Kohne : I've seen the worst of both ways. One app I deal with has a main class that inherits from 20 (no, I'm not kidding) mostly (but not completely) abstract classes. 80-90% of the functions in that class just call another function. Lots of inheritance AND lots of boilerplate! I try VERY hard not to work with that app...
  • To some degree, this is a question of language support. For example, in Ruby we could implement a simple stack that uses an array internally like so:

    require 'forwardable'

    class Stack
      extend Forwardable
      def_delegators :@internal_array, :<<, :push, :pop
      def initialize() @internal_array = [] end
    end
    

    All we're doing here is declaring what subset of the other class's functionality we want to use. There's really not a lot of repetition. Heck, if we actually wanted to reuse all of the other class's functionality, we could even specify that without actually repeating anything:

    class ArrayClone
      extend Forwardable
      def_delegators(:@internal_array, 
                      *(Array.instance_methods - Object.instance_methods))
      def initialize() @internal_array = [] end
    end
    

    Obviously (I hope), that's not code I would generally write, but I think it shows that it can be done. In languages without easy metaprogramming, it can be somewhat harder in general to keep DRY.

  • While I agree with everyone who has said "it depends" -- and that it's also language-dependent to a degree -- I'm surprised no one has mentioned that (in the famous words of Allen Holub) "extends is evil". When I first read that article I have to admit I got a little put off, but he's right: regardless of the language, the is-a relationship is about the tightest form of coupling there is. Tall inheritance chains are a distinct anti-pattern. So while it's not right to say you should always avoid inheritance, it should be used sparingly (for classes -- interface inheritance is recommended). My object-orientation-noob tendency was to model everything as an inheritance chain, and yes, it does reduce code duplication, but at a very real cost of tight coupling, which means inevitable maintenance headaches somewhere down the road.

    His article is much better at explaining why inheritance is tight coupling, but the basic idea is that is-a requires every child class (and grandchild and so on) to depend on the implementation of the ancestor classes. "Programming to the interface" is a well-known strategy for reducing complexity and assisting with agile development. You can't really program to the interface of a parent class, because the instance is that class.

    On the other hand, using aggregation/composition forces good encapsulation, making a system much less rigid. Group that reusable code into a utility class, link to it with a has-a relationship, and your client class is now consuming a service provided according to a contract. You can now refactor the utility class to your heart's content; as long as you conform to the interface, your client class can remain blissfully unaware of the change, and (importantly) it shouldn't have to be recompiled.

    I'm not proposing this as a religion, just a best practice. They're meant to be broken when needed, of course, but there's generally a good reason that "best" is in that term.

  • You could consider implementing the "decorator" code in an abstract base class that (by default) forwards all method calls to the contained object. Then, subclass the abstract decorator and override/add methods as necessary.

    abstract class AbstractFooDecorator implements Foo {
        protected Foo foo;
    
        public void bar() {
            foo.bar();
        }
    }
    
    class MyFoo extends AbstractFooDecorator {
        public void bar() {
            super.bar();
            baz();
        }
    }
    

    This at least eliminates repetition of the "forwarding" code, if you've got many classes wrapping a specific type.

    As for whether or not the guideline is a useful one, I suppose emphasis should be placed on the word "prefer". Obviously there will be cases where it makes perfect sense to use inheritance. Here's an example of when inheritance should not have been used:

    The Hashtable class was enhanced in JDK 1.2 to include a new method, entrySet, which supports the removal of entries from the Hashtable. The Provider class was not updated to override this new method. This oversight allowed an attacker to bypass the SecurityManager check enforced in Provider.remove, and to delete Provider mappings by simply invoking the Hashtable.entrySet method.

    The example highlights that testing is still required for classes in an inheritance relationship, contrary to the implication that one only needs to maintain/test "encapsulating"-style code -- the cost of maintaining a class that inherits from another might not be as cheap as it first appears.

Why am I getting "Platform Not Supported Exception" while adding a new response header?

Why am I getting "Platform Not Supported Exception" while adding a new response header? I am debugging the website using the Visual Studio web server.

Code: Response.Headers["X-XRDS-Location"] = url;

Exception Message: "This operation requires IIS integrated pipeline mode."

Any help would be appreciated...

From stackoverflow
  • Response.Headers.Add() works only in IIS7 integrated pipeline mode. Use Response.AddHeader() instead. This method will work on all platforms.

    Software Enthusiastic : Thank you very much...
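
    For reference, a minimal sketch of the suggested workaround (url is just the location string from the question):

    // Response.AddHeader works in classic mode, integrated pipeline mode,
    // and under the Visual Studio development web server.
    Response.AddHeader("X-XRDS-Location", url);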

Porting a PowerBuilder Application to .NET

Does anyone have any advice for migrating a PowerBuilder 10 business application to .NET?

My company is considering migrating a legacy PB application to .NET (C#) and I am just wondering if anyone has any experience - good or bad - that you would like to share.

The application is rather large with 10 PBL libraries, some PFC as well as custom frameworks. There are a large number of DLL calls being made as well. Finally, it uses a Microsoft SQL Server database.

We have discussed porting the "core" application code to .NET and then porting more advanced functionality across as-needed.

From stackoverflow
  • If it's rather large, you might have better results writing a front-end for it in .NET (or a web-based GUI) and using that to interact with your PB code, assuming you can expose its functionality as an API.

    If you're using PB 9 or greater, you can generate COM or .NET DLLs, which you can then consume from a C# GUI. I'd recommend this over a rewrite in any new language.

    Remember, rewrites are never a silver bullet, they always end up more time-consuming, difficult, and buggy than you first expect.

  • I think gbjbaanb gave you a good answer above.

    Some other questions worth considering:

    • Is this PB10 app a new, well-written PB10 app, or was it one made in 1998 in PB4, then gradually converted to PB10 over the years? A well-written app should have some decent segregation between the business logic and the GUI, and you should be able to systematically port your code to .Net. At least, it should be a lot easier than if this is a legacy PB app, in which case it would be likely that you'd have tons of logic buried in buttons, datawindows, menus, and who knows what else. Not impossible, but more difficult to rework.
    • How well is the app running? If it's OK and stable, and doesn't need a lot of new features, then maybe it doesn't need rewriting. Or, as gbjbaanb said, you can put .Net wrappers around some pieces and then expose the functionality you need without a full rewrite. If, on the other hand, your app is cantankerous, nasty, not really satisfying business needs, and is making your users inefficient, then you might have a case for rewriting, or perhaps some serious refactoring and then some enhancements. There are PB guys serving sentences, er, I mean, making a living with the second scenario.

    I'm not against rewrites if the software is exceedingly poor and is negatively affecting the company's business, but even then gradual adjustments and improvements are a less risky way to achieve system evolution.

    Also, don't bail on this thread until after Terry Voth posts. He's on StackOverflow and is one of the top PB guys.

  • You might want to spend some time investigating PowerBuilder 11.5 (recently released) which adds some significant .NET integration.

    Migrating to PowerBuilder 11.5 in order to make use of new .NET code will certainly be a lot easier than completely rewriting the entire app in C#.

  • When I saw the title, I was just going to lurk, being a renowned PB bigot. Oh well. Thanks for the vote of confidence, Bernard.

    My first suggestion would be to ditch the language of self-deception. If I eat half of a "lite" cheesecake, I'm still going to lose sight of my belt. A migration can take as little as 10 minutes. What you'll be doing is a rewrite. The time needs to be measured as a rewrite. The risk needs to be measured as a rewrite. And the design effort should be measured as a rewrite.

    Yes, I said design effort. "Migrate" conjures up images of pumping code through some black box with a translation mirroring the original coming out the other side. Do you want to replicate the same design mistakes that were made back in 1994 that you've been living with for years? Even with excellent quality code, I'd guess that excellent design choices in PowerBuilder may be awful design choices in C#. Does a straight conversion neglect the power and strengths of the platform? Will you be living with the consequences of neglecting a good C# design for the next 15 years?


    That rant aside, since you don't mention your motivation for moving "to .NET," it's hard to suggest what options you might have to mitigate the risk of a rewrite. If your management has simply decided that PowerBuilder developers smell bad and need to be expunged from the office, then good luck on the rewrite.

    If you simply want to deploy Windows Forms, Web Forms, Assemblies or .NET web services, or to leverage the .NET libraries, then as Paul mentioned, moving to 11.0 or 11.5 could get you there, with an effort closer to a migration. (I'd suggest again reviewing and making sure you've got a good design for the new platform, particularly with Web Forms, but that effort should be significantly smaller than a rewrite.) If you want to deploy a WPF application, I know a year is quite a while to wait, but looking into PowerBuilder 12 might be worth the effort. Pulled off correctly, the WPF capability may put PowerBuilder into a unique and powerful position.

    If a rewrite is guaranteed to be in your future (showers seem cheaper), you might want to phase the conversion. DataWindow.NET makes it possible to take your DataWindows with you. (My pet theory of the week is that PowerBuilder developers take the DataWindow for granted until they have to reproduce all the functionality that comes built in.) Being able to drop in pre-existing, pre-tested, multi-row, scrollable, minimal resource consuming, printable, data-bound dynamic UI, generating minimal SQL with built-in logical record locking and database error conversion to events, into a new application is a big leg up.

    You can also phase the transition by converting your PowerBuilder code to something that is consumable by a .NET application. As mentioned, you can produce COM objects with the PB 10 you've got, but will have to move to 11.0 or 11.5 to produce assemblies. The value of this may depend on how well partitioned your application is. If your business logic snakes through GUI events and functions instead of being partitioned out to non-visual objects (aka custom classes), the value of this may be questionable. Still, this is a design faux pas that should probably be fixed before a full conversion to C#; this is something that can be done while still maintaining the PowerBuilder application as a preliminary step to a phased and then a full conversion.

    No doubt I'd rather see you stay with PowerBuilder. Failing that, I'd like to see you succeed. Just remember, once you take that first bite, you'll have to finish it.

    Good luck finding that belt,

    Terry.


    I see you've mentioned moving "core components" to .NET to start. As you might guess by now, I think a staged approach is a wise decision. Now the definition of "core" may be debatable, but how about a contrary point of view. Food for thought? (Obviously, this was the wrong week to start a diet.) Based on where PB is right now, it would be hard to divide your application between PB and C# along application functionality (e.g. Accounts Receivable in PB, Accounts Payable in C#). A division that may work is GUI vs business logic. As mentioned before, pumping business logic out of PB into components C# can consume is already possible. How about building the GUI in C#, with the DataWindows copied from PB and the business logic pumped out as COM objects or assemblies? Going the other way, to consume .NET assemblies in PB, you'll either have to move up to 11.x and migrate to Windows Forms, or put them in a COM callable wrapper.

    Or, just train your C# developers in PowerBuilder. This just may be a rumour, but I hear the new PowerBuilder marketing tag line will be "So simple, even a C# developer can use it." ;-)

    Justin Ethier : Wow, thank you for such a detailed answer! We are still in the early phases of determining whether a rewrite is feasible. I suppose you are right - I need to be honest about this from the get-go as it will be a huge effort to rewrite our application, if that is the path ultimately taken. Our main motivation in moving to .NET is that the developers here have much more .NET experience than with PB, although I agree that it is easy to take the DataWindow's strengths for granted. Anyway, thanks again for your insight.
    Bernard Dy : Outstanding post Terry. Sorry to put you on the spot, but I've appreciated all the great things PB you've done and knew you'd posted in other PB questions on SO. And your pet theory isn't a theory, it's reality. Almost everyone I talk to discounts PB for Java or .Net. It sucks not to be able to take advantage of the tools Java and .Net have, but PB is still powerful and people don't appreciate how many apps are still running on it. If only so many PB implementations didn't misuse its power and flexibility!
  • I don't know if it's good or not, but check out this (commercial) product: PB.Net

    Justin Ethier : Thanks for the link. Another person on the team actually just discovered this as well. It's a shame there is not more information about it or a free demo/trial version available, but it sounds promising.
  • My pet theory of the week is that PowerBuilder developers take the DataWindow for granted until they have to reproduce all the functionality that comes built in.

    I'd back that theory. I went through an attempted conversion from PB8 to Java on a project several years ago that failed miserably, even using the first-gen HTML DataWindow. My current employer is hell-bent on moving to C#, not using DataWindow.NET, despite > 2K DWOs in our current product. I'm not looking forward to the day when the realization sets in. (The entire product consists of several user applications, more than a dozen services, and about 70 PBDs.)

    OP - unless your application is unusually well-structured (originally written for EA Server maybe?), this will not be a port. Things work too differently in the PB & .NET environments for a plain port to work satisfactorily. I cannot stress this enough - if you're really using the PB event model, a "port" will likely be a failure.

    You need to look at logic flow (intertwined UI & process), control flow (who owns the process or data right now), data access (UI, data layer, ??) and the parts of the DW event model you're using from code. If you're thinking about ASP.NET (as we are), your whole user interaction experience will have to change, and that will feed back into the other considerations.

    Not directly related to code: build automation will change (we use PowerGen for consistent PB builds; MSBuild is very different), as will your installation & setup.

  • I think anyone considering this for a large app would be pretty crazy not to very seriously consider using the DataWindow.NET, so as not to lose their investment in the DWs.

  • PHB's at major corporations think that Powerbuilder is a toy language and migrating to a new language like C# is trivial and can be done at a low cost. In fact, migrating a PB application to any other language will cost at least as much as developing an entirely new application on the new language. The resulting app will generally lose functionality compared to the original and will result in user dissatisfaction. I have seen a number of attempts - all have failed because of the difficulty and the user issues.

    If it ain't broke, don't fix it.

Does execCommand SaveAs work in firefox?

Why does this not work in ff/chrome?

javascript: document.execCommand('SaveAs','true','http://www.google.com');

(used as a bookmarklet)

From stackoverflow
  • As Microsoft puts it, "There is no public standard that applies to this method."

  • execCommand is not completely standardized across browsers. Indeed, execCommand('SaveAs', ...) only seems to be supported in IE. The recommended way to force a save-as is to use a Content-Disposition: attachment header, as described in http://www.jtricks.com/bits/content_disposition.html

    Since this is part of the HTTP header, you can use it on any file type. If you're using apache, you can add headers using the .htaccess file, as described here. For example:

    <FilesMatch "\.pdf$">
    <IfModule mod_headers.c>
    Header set Content-Disposition "attachment"
    # for older browsers
    Header set Content-Type "application/octet-stream"
    </IfModule>
    </FilesMatch>
    
    Andrej : I think content disposition is part of the HTTP header, not part of the document, so you should be able to use it for pdf files.
    bdonlan : Indeed you can, and here's an example of just that :)
    bdonlan : Copy them to your server? :)
    bdonlan : But more seriously, I don't know. Try opening up another question about that specific topic, maybe someone else will.
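
    If the file is served by application code rather than Apache, the same header can be set there. A minimal ASP.NET sketch, in keeping with the C# used elsewhere on this blog (the file name is an assumed example):

    // Force a save-as dialog by marking the response as an attachment.
    Response.AddHeader("Content-Disposition", "attachment; filename=report.pdf");
    Response.ContentType = "application/octet-stream";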
  • Firefox doesn't support execCommand('SaveAs'). In fact, that particular command seems to be IE-only.

    lc : Not that I know of; you'll want to use the Content-Disposition header as bdonlan suggests.