Sunday, February 13, 2011

Propagate Permissions to Javascript

I'm debating how best to propagate fairly complex permissions from the server to an AJAX application, and I'm not sure which approach to take.

Essentially, I want my permissions to be defined so I can request a whole set of permissions in one shot, and adjust the UI as appropriate (the UI changes can be as low level as disabling certain context menu items). Of course, I still need to enforce the permissions server side.

So, I was wondering if anyone has any suggestions for the best way to:

- maintain the permissions and use them in server code
- have easy access to the permissions in JavaScript
- not have to make a round-trip request to the server for each individual permission

Thoughts?

-Jerod

  • Encode them as JSON.

    From Diodeus
  • If you transmit the permission structure to the client as a JSON object (or XML, if you prefer), you can manipulate that object with the client-side code, and send it back to the server, which can do whatever it needs to validate the data and persist it.

    From pkaeding
  • I don't necessarily see it as the most "correct" solution, but would it be possible to keep all the permission stuff on the server side, and just serve the updated UI rather than some kind of JSON permissions system?

    You'd have to make the decision based on how busy and intensive your app expects to be, but definitely a decision worth making either way

    CMPalmer : That would be the ASP.Net preferred approach, but at a certain granularity on any type of system, there are advantages to having client-side control as well.
    From Gareth
  • If you have a clear set of permissions, like a "user level" or "user type", you could just pass the value down in a hidden field and access the value through the DOM. You could still do this if your permissions were more granular, but you would either have a lot of hidden fields or you would have to encode the information into XML or JSON or some other format.

    You might set them as bit flags so that you could AND a single numeric value against a mask to see if the user had the permission for a specific activity (see the JavaScript sketch after these answers). That would be very flexible, and as long as you don't have more than 32 or so specific "rights", it would allow for any permutation of those rights in a very small package (basically an unsigned int).

    For example:

    0x00000001 //edit permission
    0x00000002 //create new thing permission
    0x00000004 //delete things permission
    0x00000008 //view hidden things permission
       .
       .
       .
    0x80000000 //total control of the server and everyone logged in
    

    Then a user with a permission value of 0x00000007 could edit, create, and delete, but nothing else.

    In either case, I think you're on the right track - make the request once per page invocation, store the permissions in a global JavaScript data structure, and go from there. AJAX is nice, but you don't want to query the server for every specific permission all over your page. You would do it once on the page load, set up the presentation of your page and save the value in a global variable, then reference the permission(s) locally for event functions.

    From CMPalmer
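
A minimal sketch of the bit-flag check on the client, assuming the server emits the combined value once per page load; the flag names and the global variable are hypothetical:

    // Flag names mirror the example above; values are powers of two.
    var PERM = {
      EDIT:        0x00000001,
      CREATE:      0x00000002,
      DELETE:      0x00000004,
      VIEW_HIDDEN: 0x00000008
    };

    // Assume the server rendered this once (hidden field, JSON blob, etc.).
    var userPermissions = 0x00000007; // edit + create + delete

    function hasPermission(mask) {
      // Bitwise AND: a non-zero result means the flag is set.
      return (userPermissions & mask) !== 0;
    }

    if (!hasPermission(PERM.DELETE)) {
      // e.g. disable the "Delete" context menu item here
    }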

Dictionary<string, MyObject> or List<MyObject> with C# 3.5?

I often use Dictionary in C# 2.0 with a string key containing a unique identifier.

I am learning C# 3.0+ and it seems that I can now simply use a List and run LINQ on that object to get a specific object (with .Where()).

So, if I understand correctly, has the Dictionary class lost its purpose?

  • No, a dictionary is still more efficient for getting things back out given a key.

    With a list, you still have to iterate through it to find what you want. A dictionary does a lookup.

  • IMHO the Dictionary approach will be MUCH faster than the LINQ approach, so if you have a collection with a lot of items, you should rather use a Dictionary.

  • If you just have a List, then doing a LINQ query will scan through every item in the list, comparing it against the one you are looking for.

    The Dictionary, however, computes a hash code of the string you are looking for (returned by the GetHashCode method). This value is then used to look up the string more efficiently. For more info on how this works, see Wikipedia.

    If you have more than a few strings, the initial (List) method will start to get painfully slow.

  • Dictionary is implemented as a hashtable. Thus it should give constant time access for lookups. List is implemented as a dynamic array, giving you linear time access.

    Based on the underlying data structures, the Dictionary should still give you better performance.

    MSDN docs on Dictionary

    http://msdn.microsoft.com/en-us/library/xfhwa508.aspx

    and List

    http://msdn.microsoft.com/en-us/library/6sh2ey19.aspx

    From biozinc
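
To make the difference concrete, a small sketch contrasting the two lookups (the class, key values, and names are made up):

    using System;
    using System.Collections.Generic;
    using System.Linq;

    class MyObject
    {
        public string Key { get; set; }
        public string Name { get; set; }
    }

    class LookupSketch
    {
        static void Main()
        {
            var items = new List<MyObject>
            {
                new MyObject { Key = "A42", Name = "First" },
                new MyObject { Key = "B17", Name = "Second" }
            };

            // O(n): LINQ scans the list until it finds a match.
            MyObject fromList = items.Where(x => x.Key == "B17").FirstOrDefault();

            // O(1) on average: the dictionary hashes the key and jumps straight to it.
            Dictionary<string, MyObject> byKey = items.ToDictionary(x => x.Key);
            MyObject fromDict = byKey["B17"];

            Console.WriteLine("{0} / {1}", fromList.Name, fromDict.Name);
        }
    }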

Can anyone help me convert this ANTLR 2.0 grammar file to ANTLR 3.0 syntax?

I've converted the 'easy' parts (fragment, @header and @member declarations, etc.), but since I'm new to ANTLR I have a really hard time converting the tree statements etc.

I use the following migration guide.

The grammar file can be found here....

Below you can find some examples where I run into problems:

For instance, I have problems with:

n3Directive0!:
                d:AT_PREFIX ns:nsprefix u:uriref
                {directive(#d, #ns, #u);}
                ;

or

propertyList![AST subj]
        : NAME_OP! anonnode[subj] propertyList[subj]
        | propValue[subj] (SEMI propertyList[subj])?
        |               // void : allows for [ :a :b ] and empty list "; .".
        ;

propValue [AST subj]
        :  v1:verb objectList[subj, #v1]
                // Reverse the subject and object
        |  v2:verbReverse subjectList[subj, #v2]
        ;

subjectList![AST oldSub, AST prop]
        : obj:item { emitQuad(#obj, prop, oldSub) ; }
                (COMMA subjectList[oldSub, prop])? ;

objectList! [AST subj, AST prop]
        : obj:item { emitQuad(subj,prop,#obj) ; }
                (COMMA objectList[subj, prop])?
    | // Allows for empty list ", ."
    ;
  • n3Directive0!:
                    d=AT_PREFIX ns=nsprefix u=uriref
                    {directive($d, $ns, $u);}
                    ;
    
    • You have to use '=' for assignments.
    • Tokens can then be used as '$tokenname.getText()', ...
    • Rule results can then be used in your code as 'rulename.result'.
    • If you have rules with declared result names, you have to use those names instead of 'result'.

SEO google keyword position tools?

Hi guys,

I want to check our Google positions for several keywords every day and make a note in a spreadsheet. At the moment, we have a student doing it, but it's a rubbish job and it doesn't seem fair on them!

Are there any tools available to automate this process? I have tried rankchecker by seobook.com, but although it should be exactly what I'm looking for, when I set scheduled tasks in it, they don't work.

Any tips would be appreciated, thanks!

peter

EDIT: Liam has suggested a Python script to do this, which unfortunately isn't something I'm very familiar with! If anyone knows of a good tutorial or something to help us with this, that would be brilliant.

Update:

Found a PHP script at seoscript.net which looks like a step in the right direction.

But I can't get it to work! I get this error.

Anyone more knowledgeable than me know how to fix that? I have PEAR installed.

thanks again,

Peter

  • Have the student come up with an automatic way of doing this. Sounds like a good exercise for them.

    Bill the Lizard : Tomorrow the student will ask the same question, and we can link to this answer, closing his question as "Exact Duplicate". :)
    From Robert
  • I use a Python script to check the number of results for a set of searches each day and log the results. Then I run another script that builds a spreadsheet from the log files. Checking your position in the results page would be only a little more complex (see the sketch after these answers).

    Liam : I hope to share it once I have cleaned it up a bit.
    Peterl86 : thanks liam, that sounds like it could really help.
    From Liam
  • I think it is probably a LOT easier to let the student do it the old-fashioned way :p

    From DrG
  • I'm not sure how often it updates, but Google Webmaster Tools shows your rank for the top 20 queries in which your site appeared.

    Otherwise, writing a script or a small program in your favorite programming language should do the trick, as was already suggested.

  • Don't forget rankings vary from data center to data center, and are increasingly being tweaked to your specific search and browsing history.

    Not to sell you anything, but what you should be looking for is more a suite of site quality metrics and less simple ranking metrics, the outputs of which are increasingly questionable.

    If I were you, I would look into getting some professional help from an SEM firm with a strong development team, as they usually have some pretty advanced metrics working under the hood. Otherwise, just remember that content is king: the W3C is pretty clear about which markup has semantic value, and Google capitalizes on this.

  • It's not a PHP script or something but a nice tool to check your rank at google for some keywords (and it's free software): Google Monitor

    Peterl86 : cheers bud, I'll look into this tomorrow and report back!
  • What a coincidence - I created SERF (Search Engine Rank Finder) for this very purpose. It works with Google and MSN/Live, but I lost interest before I implemented Yahoo support. The link comes with source; the exe is in the bin folder.

    It works by downloading the actual search pages in question and finding any URL with your domain in it. It won't pass Google your cookies, but it will probably connect to the same Google server you do.

    From tsilb
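
A minimal Python sketch of the kind of rank-checking script Liam describes, assuming the requests and BeautifulSoup packages; the domain and the way result links are picked out are assumptions, since Google's markup changes often and automated querying may be against its terms of service:

    import requests
    from bs4 import BeautifulSoup

    def google_rank(keyword, domain, max_results=100):
        """Return the 1-based position of the first result containing `domain`, or None."""
        resp = requests.get(
            "https://www.google.com/search",
            params={"q": keyword, "num": max_results},
            headers={"User-Agent": "Mozilla/5.0"},
            timeout=10,
        )
        resp.raise_for_status()
        soup = BeautifulSoup(resp.text, "html.parser")
        # Assumption: organic result links are plain absolute anchors that are not Google's own.
        links = [a["href"] for a in soup.find_all("a", href=True)
                 if a["href"].startswith("http") and "google." not in a["href"]]
        for position, url in enumerate(links, start=1):
            if domain in url:
                return position
        return None

    if __name__ == "__main__":
        print(google_rank("widget gadgets", "example.com"))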

What is the best way to move files from one server to another with PHP?

If I set up a CRON job that runs a PHP script that in turn moves a file from one server to another, what would be the best way? Assume I have been given the proper username/password, and that a protocol (like SFTP) is ruled out only if the language can't support it. I'm really open to options here -- these are XML files that hold order export and customer export (non-sensitive) information, and the jobs will run daily. There is the potential that one server is Linux and the other is Windows -- both are on different networks.

  • Why not use shell_exec and scp?

    <?php
        $output = shell_exec('scp file1.txt dvader@deathstar.com:somedir');
        echo "<pre>$output</pre>";
    ?>
    
    Bob Fanger : scp is a very handy and powerful tool, but may require some configuration: http://www.google.com/search?q=+password-less+SSH+login
    From SoloBold
  • If both servers are on Linux you could use rsync for any kind of file (php, xml, html, binary, etc.). Even if one of them is Windows, there are rsync ports to Windows.

    SoloBold : rsync's good too.
  • Why not try using PHP's FTP functions?

    Then you could do something like:

    // open some file for reading
    $file = 'somefile.txt';
    $fp = fopen($file, 'r');
    
    // set up basic connection
    $conn_id = ftp_connect($ftp_server);
    
    // login with username and password
    $login_result = ftp_login($conn_id, $ftp_user_name, $ftp_user_pass);
    
    // try to upload $file
    if (ftp_fput($conn_id, $file, $fp, FTP_ASCII)) {
        echo "Successfully uploaded $file\n";
    } else {
        echo "There was a problem while uploading $file\n";
    }
    
    // close the connection and the file handler
    ftp_close($conn_id);
    fclose($fp);
    
    From Stephen
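
Since the question allows SSH-based transfer, here is a minimal sketch using the PECL ssh2 extension to copy a file over SCP; the host, credentials, and paths are placeholders, and the extension must be installed:

    <?php
    // Connect and authenticate over SSH (ssh2 PECL extension).
    $connection = ssh2_connect('deathstar.example.com', 22);
    if ($connection === false) {
        die("Could not connect\n");
    }
    if (!ssh2_auth_password($connection, 'username', 'password')) {
        die("Authentication failed\n");
    }

    // Copy the daily XML export to the remote server.
    if (ssh2_scp_send($connection, '/var/exports/orders.xml', '/incoming/orders.xml', 0644)) {
        echo "Transfer complete\n";
    } else {
        echo "Transfer failed\n";
    }
    ?>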

From Child instance call base class method that was overridden

Consider the following code:

Public Class Animal

    Public Overridable Function Speak() As String
        Return "Hello"
    End Function

End Class

Public Class Dog
    Inherits Animal

    Public Overrides Function Speak() As String
        Return "Ruff"
    End Function

End Class

Dim dog As New Dog
Dim animal As Animal
animal = CType(dog, Animal)
' Want "Hello", getting "Ruff"
animal.Speak()

How can I convert/ctype the instance of Dog to Animal and have Animal.Speak get called?

  • you don't; the subclass's method overrides the superclass's method, by definition of inheritance

    if you want the overridden method to be available, expose it in the subclass, e.g.

    Public Class Dog 
        Inherits Animal
        Public Overrides Function Speak() As String
            Return "Ruff"
        End Function
        Public Function SpeakAsAnimal() As String
            Return MyBase.Speak()
        End Function
    End Class
    

    (untested)

  • I don't think you can.

    The thing is that the object is still a dog. The behavior you're describing (getting "ruff" from the cast object rather than "hello") is standard, because you want to be able to use the Animal class to let a bunch of different types of animals speak.

    For example, if you had a third class like this:

    Public Class Cat
        Inherits Animal
    
        Public Overrides Function Speak() As String
            Return "Meow"
        End Function
    End Class
    

    Then you'd be able to access them like this:

    protected sub Something
        Dim oCat as New Cat
        Dim oDog as New Dog
    
        MakeSpeak(oCat)
        MakeSpeak(oDog)
    End sub
    
    protected sub MakeSpeak(ani as animal)
        Console.WriteLine(ani.Speak())
    end sub
    

    What you're talking about doing basically breaks the inheritance chain. Now, this can be done by setting up the Speak function to accept a parameter which tells it whether to return its base value, or by adding a separate Speak function for the base value, but out of the box you're not going to get things that behave this way.

    Mike Deck : "Another alternative, is to declare the object as an ANIMAL, and then cast it to a DOG when you need the dog's extended properties." That last part is not true. The declared type of the variable is meaningless when it comes to how a given instance behaves polymorphically.
    Stephen Wrighton : That's true, I was thinking in terms of interfaces.
  • I would ask why you are trying to get this type of behavior. It seems to me that the fact you need to invoke the parent class' implementation of a method is an indication that you have a design flaw somewhere else in the system.

    Bottom line though, as others have stated there is no way to invoke the parent class' implementation given the way you've structured your classes. Now within the Dog class you could call

    MyBase.Speak()
    

    which would invoke the parent class' implementation, but from outside the Dog class there's no way to do it.

    From Mike Deck
  • I think if you drop "Overridable" and change "Overrides" to "Shadows" you'll get what you want.

    Public Class Animal
    
    Public Function Speak() As String
        Return "Hello"
    End Function
    
    End Class
    
    Public Class Dog
        Inherits Animal
    
        Public Shadows Function Speak() As String
            Return "Ruff"
        End Function
    
    End Class
    
    Dim dog As New Dog
    Dim animal As Animal
    dog.Speak() ' should be "Ruff"
    animal = CType(dog, Animal)
    animal.Speak() ' should be "Hello"
    
    From Matt Burke

Best way to reverse-engineer a web service interface from a WSDL file?

I've inherited a WSDL file for a web service on a system that I don't have access to for development and testing.

I need to generate a web service that adheres to that WSDL. The wrapper is .NET, but if there's an easy way to do this with another platform, we might be able to look at that. The production web service is Java-based.

What's the best way to go about doing this?

Note: The inherited wsdl doesn't appear to be compatible with wsdl.exe because it doesn't conform to WS-I Basic Profile v1.1. In particular, the group that passed it on mentioned it uses another standard that the Microsoft tool doesn't support, but they didn't clarify. The error is related to a required 'name' field:

Error: Element Reference '{namespace}/:viewDocumentResponse' declared in
schema type '' from namespace ''
       - the required attribute 'name' is missing

For clarity's sake, I understand that I can easily create a .NET wrapper class from the WSDL file, but that's not what I need. It's like this:

[Diagram of the system showing the unavailable web service and the mock web service]

Update: The original web service was created using Axis.

  • You may find the .NET command-line utility wsdl.exe useful, with the /serverInterface option. According to the documentation:

    Generates interfaces for server-side implementation of an ASP.NET Web Service. An interface is generated for each binding in the WSDL document(s). The WSDL alone implements the WSDL contract (classes that implement the interface should not include either of the following on the class methods: Web Service attributes or Serialization attributes that change the WSDL contract). Short form is '/si'.

    paulwhit : this helps. although the inherited wsdl doesn't appear to be compatible with it because it doesn't conform to WS-I Basic Profile v1.1.
    Panos : If you are getting just warnings and the code is generated, probably you can use it without problem. Wsdl.exe always tries to generate Basic Profile compliant code unless there is something in WSDL that is not compatible.
    Panos : Moreover, if you build the ws client proxy from visual studio you probably received similar warnings. You can create a client proxy with wsdl.exe and compare it with the one that you have in your project.
    paulwhit : it's a stop error; no code's generated :( I'll add more info to the question.
    From Panos
  • Try mocking the wrapper interface using RhinoMocks and StructureMap.

    paulwhit : Does this handle the generation of web services? I can't seem to find that information readily available, and it doesn't look applicable on the surface. I meant "mock" in the generic sense, but maybe it's still applicable?
    From IceHeat
  • Not sure if this will help, but what I've done recently is:

    • Generate the .cs proxy file using the wsdl tool or Visual Studio
    • Change it to be a partial class
    • Create another partial class whose only job is to add a line saying that the class implements IWhatever
    • Create an interface that is the same as the generated proxy class (therefore the proxy fully implements the interface)

    Then I've used a mocking framework (Moq, in my case) to mock the web service, and used poor man's dependency injection (pass the mock into a constructor of the class under test), which can handle an instance of IWhatever (see the sketch after these answers).

    Test away..

    Hope that helps

    paulwhit : I tried wsdl.exe and it's not compatible with the inherited wsdl because it doesn't follow the standard. :( After that, I'm not sure I need a mocking framework; couldn't I just change the proxy class URL setting?
    From danswain
  • We are using WSCF - the Web Services Contract First tool from Thinktecture - for web service development, creating the XSD schema first and then generating service interfaces with the tool. It may be useful for generating service interfaces from WSDL, but I have not tried that myself yet.

    paulwhit : That's pretty slick. Didn't work, but I can use this for other things! :)
    paulwhit : (Should clarify that it *does* work for my purpose, but I'm still dealing with a borked up WSDL that I think can only work on the original app that generated it (Axis). Trying that next)
    paulwhit : Accepting because I think this would be the best way except in my edge case with a messed up WSDL.
    Vlad N : I'm glad that it helped you somewhat. ;-)
    From Vlad N
  • Yes - you can use WSCF (as per above) to generate server side code. The actual URL can then be overwritten to point to the test URL that you want to use.

    However, this just generates a stub. You still have to code the actual e.g. GetCustomers() method which is somewhat suspect because you have no idea how the actual implementation works.

    Then you can either mock this or create a simple ASP web server to run it.

    From nzpcmad
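
A hypothetical C# sketch of the partial-class-plus-interface approach danswain describes; WeatherService, GetForecast, and IWeatherService are made-up names standing in for the wsdl.exe-generated proxy and its methods:

    // Generated by wsdl.exe, edited only to add the "partial" keyword.
    public partial class WeatherService
    {
        public virtual string GetForecast(string city)
        {
            // In the real proxy this issues the SOAP call.
            return "sunny";
        }
    }

    // Hand-written interface mirroring the proxy's public methods.
    public interface IWeatherService
    {
        string GetForecast(string city);
    }

    // Second partial class: its only job is to declare the interface.
    public partial class WeatherService : IWeatherService { }

    // Consumers depend on the interface, so tests can pass in a Moq mock instead.
    public class ForecastReporter
    {
        private readonly IWeatherService _service;

        public ForecastReporter(IWeatherService service)
        {
            _service = service;
        }

        public string Report(string city)
        {
            return _service.GetForecast(city);
        }
    }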

"File Not Found" in MSBuild Community Tasks -- Which File?

I'm trying to use the VssGet task of the MSBuild Community Tasks, and the error message "File or project not found" is beating me with a stick. I can't figure out what in particular the error message is referring to. Here's the task:

<LocalFilePath Include="C:\Documents and Settings\michaelc\My Documents\Visual Studio 2005\Projects\Astronom\Astronom.sln" />

<VssGet DatabasePath="\\ofmapoly003\Individual\michaelc\VSS\Astronom_VSS\srcsafe.ini"
        Path="$/Astronom_VSS"
        LocalPath="@(LocalFilePath)"
        UserName="build" Password="build"
        Recursive="True" />

If I write a StreamReader to read either the database path or the local path, it succeeds fine. So the path to everything appears to be accessible. Any ideas?

  • Two thoughts. One, sometimes a type load exception manifests as a FNF - let's hope that's not it. But if the code is actually being honest, you can track the problem using Procmon or Filemon. Start one of those utilities and then run your task again. You should be able to track down a record of a file that couldn't be located.

  • @famoushamsandwich that's a great response -- I had not previously heard of procmon or filemon. Tried procmon on the problem, but even after sifting through the relevant output (my gosh the machine does a lot more stuff behind the screen than I was aware of) I couldn't find where a file I'm referencing wasn't being found.

    dpurrington : You have to filter the results, either by process, or even restrict it to errors (although that doesn't always work correctly for me)
  • Procmon and Filemon are good suggestions - just make sure you filter the results to only show errors. Otherwise the success messages will bury the problem entries. Also, you can filter out processes that are not at fault (either through the filter dialog or by right-clicking the entry and choosing "Exclude Process".)

    A couple other thoughts:

    • In the LocalFilePath, you are specifying a single file as opposed to a folder. The task, on the other hand, specifies to get files recursively. Perhaps you need to remove "\Astronom.sln" from the LocalFilePath?
    • Is the build task being run under your account or another? It's possible you have a permissions issue
    • Do you already have a copy of the code pulled down in the same location? Perhaps there is a failure to overwrite an existing file/folder?
    From Pedro

Sharing code in respond_to blocks

I have the following before_filter:

  def find_current_membership
    respond_to do |wants|
      wants.html { @current_membership = @group.memberships.for(@current_user) }
      wants.rss  {}
      wants.js   { @current_membership = @group.memberships.for(@current_user) }
    end
  end

I would like to share the code for the HTML and JS blocks. Is there a better way than just throwing the code into a method? I was hoping this would work:

  def find_current_membership
    respond_to do |wants|
      wants.rss  {}
      wants.all  { @current_membership = @group.memberships.for(@current_user) }
    end
  end

But alas, it did not.

  • In this case you could probably do something like:

    before_filter :only => :find_current_membership do |c|
      c.load_current_membership unless c.request.format.rss?
    end
    

    Alternatively you could use the request.format.rss? in your controller method to conditionally load the memberships.

    Either way your first step should be to refactor that into a method.

    From jonnii
  • If I'm reading this right, it looks like find_current_membership is your before_filter method, is that right? eg:

    class SomeController < ApplicationController
      before_filter :find_current_membership
      ...
    

    I think it's a bit non-standard to use respond_to inside a before_filter; filters are meant to just do something and render on failure. It seems to me like you want something more like this:

    class SomeController < ApplicationController
      before_filter :find_current_membership

      def some_action
        # stuff, or maybe nothing
      end

      private

      def find_current_membership
        @current_membership = @group.memberships.for(@current_user) unless request.format.rss?
      end
    end
    
  • How about this simple solution!?

    def find_current_membership
      @current_membership = @group.memberships.for(@current_user)
      respond_to do |wants|
        wants.html
        wants.rss  {}
        wants.js
      end
    end
    
    From allesklar

What are the Conventional GEM PATHS for Ruby under OS X 10.5?

I have a performance problem with my ruby on my machine, which I think I have isolated to loading libraries (when #require is called), so I'm trying to work out whether ruby is searching too many folders for libraries.

When I run

$ gem environment
RubyGems Environment:
  - RUBYGEMS VERSION: 1.3.0
  - RUBY VERSION: 1.8.6 (2008-03-03 patchlevel 114) [universal-darwin9.0]
  - INSTALLATION DIRECTORY: /Library/Ruby/Gems/1.8
  - RUBY EXECUTABLE: /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/bin/ruby
  - EXECUTABLE DIRECTORY: /usr/bin
  - RUBYGEMS PLATFORMS:
    - ruby
    - universal-darwin-9
  - GEM PATHS:
     - /Library/Ruby/Gems/1.8
     - /Users/matt/.gem/ruby/1.8
     - /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8
  - GEM CONFIGURATION:
     - :update_sources => true
     - :verbose => true
     - :benchmark => false
     - :backtrace => false
     - :bulk_threshold => 1000
     - :sources => ["http://gems.rubyforge.org", "http://gems.github.com/"]
  - REMOTE SOURCES:
     - http://gems.rubyforge.org
     - http://gems.github.com/

There's not much in /Users/matt/.gem, but there's tons in both /Library/Ruby and /System/Library/Frameworks/Ruby.framework.

What gives? Is this normal?

Thanks in advance, folks.

  • Yep. That all looks pretty standard to me. My mac running MacOS 10.5 similarly has nothing in ~/.gem/ruby/1.8/gems/ and quite a bit in the other two locations.

  • As Gabe mentioned, yes, this is normal.

    A little more info:

    /System/Library/Frameworks/Ruby.framework <-- used system wide for all users, usually owned by root. When you 'sudo gem install ...' the gem you're installing goes here...

    /Users/matt/.gem <-- user 'matt' has his own gem directory. every user gets one.

    When you just 'gem install' as 'matt', it will fall back to your private gem dir. This gets created automatically the first time it's needed.

    From Eric Monti
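
If you want to see exactly what your Ruby is searching, a quick sketch you can run from a terminal:

    # Print the directories require will search, plus the active gem paths.
    require 'rubygems'

    puts 'Load path entries:'
    puts $LOAD_PATH

    puts 'Gem paths:'
    puts Gem.path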

What's the best way to automate the backup of a couple of folders in Windows?

I've got a couple of folders of personal scripts and such that I'd rather not lose if the hard drive on my dev machine goes out (all the source is in SVN, and all the other important stuff is backed up elsewhere as well). What's the easiest way to back up certain folders to another local HD at something like 2 AM every morning?

  • Genie Backup Manager is pretty good.

    http://www.genie-soft.com/

    Comodo is also OK, and free. Not quite as reliable though, especially with the scheduling:

    http://backup.comodo.com/download.html

    From Eli
  • xcopy and Scheduled task

    If you need something more user friendly or more features, you might like to take a look at MS SyncToy

    From AtliB
  • I'd use SyncBack by 2BrightSparks. I've just checked out the freeware version I've got installed on my machine, and it does support scheduling. I use SyncBack for all my backups at the moment, and it works great. The pay-for versions give you many more options including using full regular expressions to specify files/folders, but the freeware version has been adequate for me.

    From
  • Robocopy and the Windows Task Scheduler (see the sketch after these answers). MS doesn't seem to offer a standalone copy of robocopy.exe. You can get it as part of the Windows Server 2003 Resource Kit Tools.

    I recently learned that it is one of the standard command line tools in Windows Vista and Server 2008.

    From raven
  • Use Mozy. It's a freebie upload-to-the-internet thing that works really well. You can encrypt the files using your own key too, which is nice.

    Always back up your precious files off-site; another local hard drive is not going to help you any if you come home to find you've been visited by burglars.

    If you're worried about HDD failure, get Acronis or Ghost and back up the entire drive - replacing Windows isn't something I enjoy doing.

    From gbjbaanb
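
A rough sketch of the Robocopy-plus-Task-Scheduler approach mentioned above; the folder names, log path, and task name are placeholders:

    rem backup.cmd - mirror the folders to the second drive (robocopy ships with the
    rem Windows Server 2003 Resource Kit and is built in on Vista/Server 2008).
    robocopy "C:\Scripts" "D:\Backup\Scripts" /MIR /R:2 /W:5 /LOG+:"D:\Backup\backup.log"

    rem Run once from a command prompt to schedule backup.cmd for 2 AM every day.
    schtasks /Create /SC DAILY /ST 02:00 /TN "NightlyScriptBackup" /TR "C:\Scripts\backup.cmd"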

Declarative and programmatic SWFLoaders

What's the difference in terms of security between declarative and programmatic SWFLoaders? In the following code, loader1 throws a security exception while loader2 does not.

public function someFunction(source:String):void
{
  var loader1:SWFLoader = new SWFLoader();
  loader1.load(source);

  loader2.source = source;
}

...

<mx:SWFLoader id="loader2"/>
  • I don't think there is any difference in terms of security. Remember, MXML gets converted to ActionScript by the mxmlc compiler before the actual compilation commences, so a declarative SWFLoader (or any other declarative element, for that matter) is just a short-hand way of creating something instead of coding it by hand. You can use the -compiler.keep-generated-actionscript mxmlc argument to see what kind of code gets generated from your MXML.

    And the reason you're not seeing a runtime error from the loader2.source = source; line in that example is that since the previous line of code throws an error, the execution of that function stops there. Try commenting out the line where you call loader1.load(source) and you'll see the next line throw this kind of a SecurityError:

    SecurityError: Error #2148: SWF file http://example.com/test.swf cannot access local resource file:///Users/username/Desktop/picture.jpg. Only local-with-filesystem and trusted local SWF files may access local resources.
        at flash.display::Loader/_load()
        at flash.display::Loader/load()
        at mx.controls::SWFLoader/loadContent()
        at mx.controls::SWFLoader/load()
        at mx.controls::SWFLoader/commitProperties()
        at mx.core::UIComponent/validateProperties()
        at mx.managers::LayoutManager/validateProperties()
        at mx.managers::LayoutManager/doPhasedInstantiation()
        at Function/http://adobe.com/AS3/2006/builtin::apply()
        at mx.core::UIComponent/callLaterDispatcher2()
        at mx.core::UIComponent/callLaterDispatcher()
    
    From hasseg

OCR resources online

Are there any guides/tutorials for learning to "read" text from an image? I would appreciate something that explains it "for dummies". Step by step would be nice :)

  • I have had some success with the Google lib Tesseract and the Python bindings for it.

    From Andrew Cox
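
A minimal sketch along those lines, assuming the Tesseract engine plus the pytesseract and Pillow packages are installed (the filename is a placeholder; pytesseract is just one of several Python bindings):

    from PIL import Image
    import pytesseract

    # Run Tesseract OCR over the image and print the recognized text.
    text = pytesseract.image_to_string(Image.open("scanned_page.png"))
    print(text)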

What's the Best way to Generate a Dynamic XML for Web Service?

For my web service component, I need to generate a relatively large XML document (~500 lines) according to many factors. I am facing a few different choices here:

1. StringBuilder
2. XmlWriter class
3. C# object with serialization???

Which one should I use? Are there any other ways that I am not aware of?

  • I generate an RSS feed very simply using LINQ to XML. It's the nicest XML API I know of, to be honest.

    I have a couple of extension methods which I use to make it even easier - they convert from an anonymous type to either elements or attributes:

      public static IEnumerable<XElement> AsXElements(this object source)
      {
          foreach (PropertyInfo prop in source.GetType().GetProperties())
          {
              object value = prop.GetValue(source, null);
              yield return new XElement(prop.Name.Replace("_", "-"), value);
          }
      }
    
      public static IEnumerable<XAttribute> AsXAttributes(this object source)
      {
          foreach (PropertyInfo prop in source.GetType().GetProperties())
          {
              object value = prop.GetValue(source, null);
              yield return new XAttribute(prop.Name.Replace("_", "-"), value ?? "");
          }
      }
    

    That may not be at all appropriate for you, but I find it really handy. Of course, this assumes you're using .NET 3.5... (a standalone sketch appears after these answers)

    From Jon Skeet
  • If you populate the XML with data from a database, you can generate the whole XML by using a SQL query and create a class with a property that holds the XML blob. The property type can be XElement. This is the easiest I can think of.

    From codemeit
  • Need more info, but I would not use object serialization. It's quite rigid and hides too much of the implementation, especially when consumed by somebody other than your own application. I would also not use a StringBuilder, because all of a sudden you are handling the escaping of content and doing all the hard and error-prone work yourself.

    For low level stuff, XmlWriter is a good way to go. If you're Linqing, then the XElement stuff is pretty nice.

    From Keltex
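
A standalone sketch of building the kind of export document the question describes with LINQ to XML (.NET 3.5); the element names and data are made up:

    using System;
    using System.Linq;
    using System.Xml.Linq;

    class OrderExportSketch
    {
        static void Main()
        {
            // Hypothetical order data; in the real service this would come from the database.
            var orders = new[]
            {
                new { Id = 1001, Customer = "Acme",   Total = 250.00m },
                new { Id = 1002, Customer = "Globex", Total = 99.95m  }
            };

            var doc = new XDocument(
                new XElement("orders",
                    from o in orders
                    select new XElement("order",
                        new XAttribute("id", o.Id),
                        new XElement("customer", o.Customer),
                        new XElement("total", o.Total))));

            // Escaping and formatting are handled for you, unlike the StringBuilder route.
            Console.WriteLine(doc);
        }
    }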

AJAX console window with ANSI/VT100 support?

I'm planning to write a gateway web application, which would need a "terminal window" with VT100/ANSI escape code support. Are there any AJAX-based alternatives for such a task?

I'm thinking something like this: http://tryruby.hobix.com/

My preferred backend for the system is Python/Twisted/Pylons, but since I'm just planning, I will explore every option.

does YUI have selectors like in jQuery?

Does YUI have selector methods like jQuery?

e.g. get me all divs that are children of that have links in them?

Migrating to a GUI without losing business logic written in COBOL

We maintain a system that has over a million lines of COBOL code. Does someone have suggestions about how to migrate to a GUI (probably Windows based) without losing all the business logic we have written in COBOL? And yes, some of the business logic is buried inside the current user interface.

  • Writing a screen scraper is probably your best bet. Some of the major ERP systems have done this for years during the transition from server-based apps to 3-tier applications. One I have worked with had loads of interesting features such as drop-down lists for regularly used fields, date pop-ups and even client-based macro languages based on the scraping input.

    These weren't great but worked well for the clients and made sure the applications still worked in a reliable fashion.

    There are a lot of different ways to put this together, but if you put some thought into it you could probably use Java or .NET to create a desktop-based application, and with a little extra effort make a web-based implementation.

    From Mark Nold
  • If it was me I would look into something like this:

    NetCobol for Windows

    It should be fairly easy to wrap your COBOL with an interface that exposes the functionality (if it isn't already written that way) and then call it from a .NET application.

    It took us about 15 years to get off of our mainframe, because we didn't do something like this.

    From bruceatk
  • Editors: the tag "busines-logic" is misspelled. (I don't know how to edit that, or else don't have enough reputation yet.)

  • Thanks Nathan for the spelling correction. I must confess I'm a developer not an editor.

    From Thayne
  • Micro Focus provides a tool called Enterprise Server which allows COBOL to interact with web services.

    If you have a COBOL program A and another COBOL program B and A calls B via the interface section, the tool allows you to expose B's interface section as a web service.

    For program A, you then generate a client proxy and A can now call B via a web service.

    Of course, because B now has a web service any other type of program (command line, Windows application, Java, ASP etc.) can now also call it.

    Using this approach, you can "nibble away at the edges" to move the GUI to a modern, browser based approach using something like ASP while still utilising the COBOL business engine.

    And once you have a decent set of web services, these can be used for any new development which provides a way of moving away from COBOL in the longer term.

    From nzpcmad
  • You could use an ESB to expose the back-end legacy services, and then code your GUI to invoke the services via the ESB.

    Then you can begin replacing the legacy services with implementations on your new platform of choice.
    The GUI need not be aware of the cut-over of the back-end service implementation, as long as the interface to the service does not change - minor changes may be hidden from the GUI by the ESB.

    Business logic that resides in the legacy user interface layer will need to be refactored by extracting the business logic and exposing it as new services on the new platform to be consumed by the new GUI via the ESB.

    As for the choice of platform for the new GUI, why not consider a web-based UI rather than a native Windows platform? Then at least updates to the UI will only need to be applied to the web server, rather than having to roll out changes to each individual workstation.

    From crowne

NHibernate one-to-one mapping where second table data can be null.

I have an existing database with the table Transactions in it. I have added a new table called TransactionSequence where each transaction will ultimately have only one record. We are using the sequence table to count transactions for a given account. I have mapped this as a one-to-one mapping where TransactionSequence has a primary key of TransactionId.

The constraint is that there is an INSTEAD OF trigger on the Transaction table that does not allow updates of cancelled or posted transactions.

So, when the sequence is calculated and the transaction is saved, NHibernate tries to send an update on the transaction like 'UPDATE Transaction SET TransactionId = ? WHERE TransactionId = ?'. But this fails because of the trigger. How can I configure my mapping so that NHibernate will not try to update the Transaction table when a new TransactionSequence table is inserted?

Transaction mapping:

<class name="Transaction" table="Transaction" dynamic-update="true" select-before-update="true">
    <id name="Id" column="ID">
        <generator class="native" />
    </id>

 <property name="TransactionTypeId" access="field.camelcase-underscore" />
 <property name="TransactionStatusId" column="DebitDebitStatus" access="field.camelcase-underscore" />

    <one-to-one name="Sequence" class="TransactionSequence" fetch="join"
                 lazy="false" constrained="false">      
    </one-to-one>
</class>

And the sequence mapping:

<class name="TransactionSequence" table="TransactionSequence" dynamic-update="true">
    <id name="TransactionId" column="TransactionID" type="Int32">
        <generator class="foreign">
            <param name="property">Transaction</param>
        </generator>
    </id>

    <version name="Version" column="Version" unsaved-value="-1" access="field.camelcase-underscore" />

 <property name="SequenceNumber" not-null="true" />

    <one-to-one name="Transaction" 
                class="Transaction" 
                constrained="true" 
                foreign-key="fk_Transaction_Sequence" />

</class>

Any help would be greatly appreciated...

  • One-to-one mapping in NHibernate doesn't work the way you think it does. It's designed so that you have two classes which, when persisted to their corresponding tables, have the same primary keys.

    However you can make it work, but it's not pretty. I'll show you how then offer up some alternatives:

    In your Transaction hbm.xml:

    <one-to-one name="Sequence" class="TransactionSequence" property-ref="Transaction"/>
    

    In your Sequence hbm.xml:

    <many-to-one name="Transaction" class="Transaction" column="fk_Transaction_Sequence" />
    

    This should do what you want it to do. Note the property-ref.

    The next question you're going to post is going to ask how you get lazy loading on one-to-one associations. The answer is, you can't... well, you can, but it probably won't work. The problem is that you have your foreign key on the sequence table, which means that NHibernate has to hit the database to see if the target exists. You can then try playing around with constrained="true/false" to see if you can persuade it to lazily load the one-to-one association.

    All in all, it's going to result in a total waste of your time.

    I suggest either:

    1. Have two many-to-one associations.
    2. Have a many-to-one association with a collection on the other end.

    This will save you a lot of headaches in the long run.

    From jonnii
  • Turns out that for my situation a <join> mapping worked best. I just had to make sure that the properties that came from the second table were nullable types, or it would do an insert on save even if nothing had changed. Since I did not need lazy loading for the second table, this works great. I am sure that I could have gotten paired many-to-one mappings to work, but it was not intuitive and seemed more complicated than the join option; note that <join> is only available in NHibernate 2.0 and up (a mapping sketch follows).
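
A rough sketch of what that <join> mapping might look like (NHibernate 2.0+); column names follow the mappings above, but treat it as an assumption rather than the exact configuration used:

    <class name="Transaction" table="Transaction" dynamic-update="true">
        <id name="Id" column="ID">
            <generator class="native" />
        </id>

        <!-- optional="true": NHibernate only writes the joined row when its properties are non-null -->
        <join table="TransactionSequence" optional="true">
            <key column="TransactionID" />
            <!-- SequenceNumber should be a nullable type (int?) on the Transaction class -->
            <property name="SequenceNumber" column="SequenceNumber" />
        </join>
    </class>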

UPDATE statement in Oracle using SQL or PL/SQL to update first duplicate row ONLY

Hi,

I'm looking for an UPDATE statement that will update a single duplicate row only and leave the rest of the duplicate rows intact, using ROWID or some other element available in Oracle SQL or PL/SQL.

Here is an example duptest table to work with:

CREATE TABLE duptest (ID VARCHAR2(5), NONID VARCHAR2(5));

  • run one INSERT INTO duptest VALUES('1','a');

  • run four (4) times INSERT INTO duptest VALUES('2','b');

Also, the first duplicate row always has to be updated (not deleted), whereas the other three (3) have to remain as is!

Thanks a lot, Val.

  • Will this work for you:

    update duptest
    set nonid = 'c'
    WHERE ROWID IN (SELECT MIN(ROWID)
                    FROM duptest
                    GROUP BY id, nonid)
    
  • This worked for me, even for repeated runs.

    --third, update the one row
    UPDATE DUPTEST DT
    SET DT.NONID = 'c'
    WHERE (DT.ID,DT.ROWID) IN(
                             --second, find the row id of the first dup
                             SELECT 
                               DT.ID
                              ,MIN(DT.ROWID) AS FIRST_ROW_ID
                             FROM DUPTEST DT
                             WHERE ID IN(
                                        --first, find the dups
                                        SELECT ID
                                        FROM DUPTEST
                                        GROUP BY ID
                                        HAVING COUNT(*) > 1
                                        )
                             GROUP BY
                               DT.ID
                             )
    
  • I think this should work.

    UPDATE DUPTEST SET NONID = 'C'
    WHERE ROWID in (
        Select rid from (
            SELECT ROWID AS rid,
                   Row_Number() over (Partition By ID, NONID order by ID) rn
            FROM DUPTEST
        ) WHERE rn = 1
    )
    
  • I know that this does not answer your initial question, but there is no key on your table, and the problem you have addressing a specific row results from that.

    So my suggestion - if the specific application allows for it - would be to add a key column to your table (e.g. REAL_ID as INTEGER).

    Then you could find out the lowest id for the duplicates

    select min (real_id) 
    from duptest
    group by (id, nonid)
    

    and update just these rows:

    update duptest
    set nonid = 'C'
    where real_id in (<select from above>)
    

    I'm sure the update statement can be tuned somewhat, but I hope it illustrates the idea.

    The advantage is a "cleaner" design (your id column is not really an id), and a more portable solution than relying on the DB-specific versions of rowid.

    From IronGoofy
  • Kogus,

    You are the boss!!!

    I've tried so many options and variations, here and there, and yours was the key, the bull's eye!

    Thank you very much!!!

    Philadelphia PHILLIES - 2008 WORLD CHAMPIONS!!!

    Jaap Coomans : So... flag his answer as the accepted solution
  • UPDATE duptest SET nonid = 'c' WHERE nonid = 'b' AND rowid = (SELECT min(rowid) FROM duptest WHERE nonid = 'b');

  • JosephStyons, tons of thanks to you. Your solution worked for me after a 3-hour battle.