Thursday, April 14, 2011

TreeViewItem.Header with Grid inside

I'm trying to make "Img" appear at the end of TreeViewItem.Header (as close as possible to the right side of the TreeView control), but no matter what I try the header's width is always less than the TreeView's, and of course "Img" appears somewhere in the middle of the control. This is probably a very newbie question; I'm just starting to learn WPF.

<TreeView Grid.Row="1" Grid.ColumnSpan="2" Margin="3,3,3,3" Name="treeView1" Width="300">
    <TreeViewItem HorizontalAlignment="Stretch">
        <TreeViewItem.Header>
            <Grid HorizontalAlignment="Stretch">
                <Grid.RowDefinitions>
                    <RowDefinition  />
                </Grid.RowDefinitions>
                <Grid.ColumnDefinitions>
                    <ColumnDefinition />
                    <ColumnDefinition Width="30" />
                </Grid.ColumnDefinitions>

                <Label Grid.Column="0" Grid.Row="0">General</Label>
                <Label Grid.Column="1" Grid.Row="0">Img</Label>
            </Grid>
        </TreeViewItem.Header>
    </TreeViewItem>
</TreeView>
From stackoverflow
  • To achieve that you need to change the control template of the TreeViewItem using the ItemContainerStyle of the TreeView (this is the style that gets applied to any item at the root of the TreeView).

    The default TreeViewItem is not stretched, so it does not extend all the way to the right. When you set the Header, it is inside the TreeViewItem and so cannot extend past it.

    I will not post the whole style because it would be way too long.

    Here's what to do in Blend: select your TreeViewItem, right-click and choose "Edit Control Parts/Edit a copy". Save the style wherever you want.

    Now, in the template, expand the element tree and locate the "Bd" element, which is a Border. Change its Grid.ColumnSpan property to "2".

    Last, set the "HorizontalContentAlignment" property of your item to "Stretch" (either on the item or through the style if you need to apply that to several nodes).

    Your item should now be the correct width. Now, this only applies to the item you selected. If you want that to work for any item you add to the treeview, you need to change the "ItemContainerStyle" of the Treeview to the newly created style, and remove the style that Blend placed on the TreeViewItem.

    Last but not least, you need to set the ItemContainerStyle of your TreeViewItem to that same style so that its children also extend all the way, and so on and so forth.

    So in the end, with your example and a child node on the first item:

    <Grid x:Name="LayoutRoot">
        <TreeView Margin="3,3,3,3" Name="treeView1" Width="300" ItemContainerStyle="{DynamicResource TreeViewItemStyle1}">
            <TreeViewItem HorizontalAlignment="Stretch" HorizontalContentAlignment="Stretch" ItemContainerStyle="{DynamicResource TreeViewItemStyle1}">
                <TreeViewItem.Header>
                    <Grid HorizontalAlignment="Stretch">
                        <Grid.RowDefinitions>
                            <RowDefinition />
                        </Grid.RowDefinitions>
                        <Grid.ColumnDefinitions>
                            <ColumnDefinition />
                            <ColumnDefinition Width="30" />
                        </Grid.ColumnDefinitions>

                        <Label Grid.Column="0" Grid.Row="0">General</Label>
                        <Label Grid.Column="1" Grid.Row="0">Img</Label>
                    </Grid>
                </TreeViewItem.Header>
                <TreeViewItem>
                    <TreeViewItem.Header>
                        <Grid HorizontalAlignment="Stretch">
                            <Grid.RowDefinitions>
                                <RowDefinition />
                            </Grid.RowDefinitions>
                            <Grid.ColumnDefinitions>
                                <ColumnDefinition />
                                <ColumnDefinition Width="30" />
                            </Grid.ColumnDefinitions>

                            <Label Grid.Column="0" Grid.Row="0">General</Label>
                            <Label Grid.Column="1" Grid.Row="0">Img</Label>
                        </Grid>
                    </TreeViewItem.Header>
                </TreeViewItem>
            </TreeViewItem>
        </TreeView>
    </Grid>
    

    The "TreeViewItemStyle1" is the style that Blend created for you.

    EDIT

    As requested, here's the full style as generated by Blend and then modified. It is long because it is essentially a copy of the built-in style with minor modifications.

    <Style x:Key="TreeViewItemFocusVisual">
          <Setter Property="Control.Template">
           <Setter.Value>
            <ControlTemplate>
             <Rectangle/>
            </ControlTemplate>
           </Setter.Value>
          </Setter>
         </Style>
         <PathGeometry x:Key="TreeArrow" Figures="M0,0 L0,6 L6,0 z"/>
         <Style x:Key="ExpandCollapseToggleStyle" TargetType="{x:Type ToggleButton}">
          <Setter Property="Focusable" Value="False"/>
          <Setter Property="Width" Value="16"/>
          <Setter Property="Height" Value="16"/>
          <Setter Property="Template">
           <Setter.Value>
            <ControlTemplate TargetType="{x:Type ToggleButton}">
             <Border Width="16" Height="16" Background="Transparent" Padding="5,5,5,5">
              <Path Fill="Transparent" Stroke="#FF989898" x:Name="ExpandPath" Data="{StaticResource TreeArrow}">
               <Path.RenderTransform>
                <RotateTransform Angle="135" CenterX="3" CenterY="3"/>
               </Path.RenderTransform>
              </Path>
             </Border>
             <ControlTemplate.Triggers>
              <Trigger Property="IsMouseOver" Value="True">
               <Setter Property="Stroke" TargetName="ExpandPath" Value="#FF1BBBFA"/>
               <Setter Property="Fill" TargetName="ExpandPath" Value="Transparent"/>
              </Trigger>
              <Trigger Property="IsChecked" Value="True">
               <Setter Property="RenderTransform" TargetName="ExpandPath">
                <Setter.Value>
                 <RotateTransform Angle="180" CenterX="3" CenterY="3"/>
                </Setter.Value>
               </Setter>
               <Setter Property="Fill" TargetName="ExpandPath" Value="#FF595959"/>
               <Setter Property="Stroke" TargetName="ExpandPath" Value="#FF262626"/>
              </Trigger>
             </ControlTemplate.Triggers>
            </ControlTemplate>
           </Setter.Value>
          </Setter>
         </Style>
         <Style x:Key="TreeViewItemStyle1" TargetType="{x:Type TreeViewItem}">
          <Setter Property="Background" Value="Transparent"/>
          <Setter Property="HorizontalContentAlignment" Value="{Binding Path=HorizontalContentAlignment, RelativeSource={RelativeSource AncestorType={x:Type ItemsControl}}}"/>
          <Setter Property="VerticalContentAlignment" Value="{Binding Path=VerticalContentAlignment, RelativeSource={RelativeSource AncestorType={x:Type ItemsControl}}}"/>
          <Setter Property="Padding" Value="1,0,0,0"/>
          <Setter Property="Foreground" Value="{DynamicResource {x:Static SystemColors.ControlTextBrushKey}}"/>
          <Setter Property="FocusVisualStyle" Value="{StaticResource TreeViewItemFocusVisual}"/>
          <Setter Property="Template">
           <Setter.Value>
            <ControlTemplate TargetType="{x:Type TreeViewItem}">
             <Grid>
              <Grid.ColumnDefinitions>
               <ColumnDefinition MinWidth="19" Width="Auto"/>
               <ColumnDefinition Width="Auto"/>
               <ColumnDefinition Width="*"/>
              </Grid.ColumnDefinitions>
              <Grid.RowDefinitions>
               <RowDefinition Height="Auto"/>
               <RowDefinition/>
              </Grid.RowDefinitions>
              <ToggleButton x:Name="Expander" Style="{StaticResource ExpandCollapseToggleStyle}" ClickMode="Press" IsChecked="{Binding Path=IsExpanded, RelativeSource={RelativeSource TemplatedParent}}"/>
              <Border x:Name="Bd" SnapsToDevicePixels="true" Grid.Column="1" Background="{TemplateBinding Background}" BorderBrush="{TemplateBinding BorderBrush}" BorderThickness="{TemplateBinding BorderThickness}" Padding="{TemplateBinding Padding}" Grid.ColumnSpan="2">
               <ContentPresenter HorizontalAlignment="{TemplateBinding HorizontalContentAlignment}" x:Name="PART_Header" SnapsToDevicePixels="{TemplateBinding SnapsToDevicePixels}" ContentSource="Header"/>
              </Border>
              <ItemsPresenter x:Name="ItemsHost" Grid.Column="1" Grid.ColumnSpan="2" Grid.Row="1"/>
             </Grid>
             <ControlTemplate.Triggers>
              <Trigger Property="IsExpanded" Value="false">
               <Setter Property="Visibility" TargetName="ItemsHost" Value="Collapsed"/>
              </Trigger>
              <Trigger Property="HasItems" Value="false">
               <Setter Property="Visibility" TargetName="Expander" Value="Hidden"/>
              </Trigger>
              <Trigger Property="IsSelected" Value="true">
               <Setter Property="Background" TargetName="Bd" Value="{DynamicResource {x:Static SystemColors.HighlightBrushKey}}"/>
               <Setter Property="Foreground" Value="{DynamicResource {x:Static SystemColors.HighlightTextBrushKey}}"/>
              </Trigger>
              <MultiTrigger>
               <MultiTrigger.Conditions>
                <Condition Property="IsSelected" Value="true"/>
                <Condition Property="IsSelectionActive" Value="false"/>
               </MultiTrigger.Conditions>
               <Setter Property="Background" TargetName="Bd" Value="{DynamicResource {x:Static SystemColors.ControlBrushKey}}"/>
               <Setter Property="Foreground" Value="{DynamicResource {x:Static SystemColors.ControlTextBrushKey}}"/>
              </MultiTrigger>
              <Trigger Property="IsEnabled" Value="false">
               <Setter Property="Foreground" Value="{DynamicResource {x:Static SystemColors.GrayTextBrushKey}}"/>
              </Trigger>
             </ControlTemplate.Triggers>
            </ControlTemplate>
           </Setter.Value>
          </Setter>
         </Style>
    
    Sergej Andrejev : Could you post the style, or at least a setter. I don't find Bd element you are talking about
  • See this post for two samples. I just created these today.

    http://stackoverflow.com/questions/664632/highlight-whole-treeviewitem-line-in-wpf

    Sergej Andrejev : This was useful too, but Denis was first

Platform independent file locking?

I'm running a very computationally intensive scientific job that spits out results every now and then. The job is basically to just simulate the same thing a whole bunch of times, so it's divided among several computers, which use different OSes. I'd like to direct the output from all these instances to the same file, since all the computers can see the same filesystem via NFS/Samba. Here are the constraints:

  1. Must allow safe concurrent appends. Must block if some other instance on another computer is currently appending to the file.
  2. Performance does not count. I/O for each instance is only a few bytes per minute.
  3. Simplicity does count. The whole point of this (besides pure curiosity) is so I can stop having every instance write to a different file and manually merging these files together.
  4. Must not depend on the details of the filesystem. Must work with an unknown filesystem on an NFS or Samba mount.

The language I'm using is D, in case that matters. I've looked, there's nothing in the standard lib that seems to do this. Both D-specific and general, language-agnostic answers are fully acceptable and appreciated.

From stackoverflow
  • I don't know D, but I think using a mutex file to do the job might work. Here's some pseudo-code you might find useful:

    do {
      // Try to create a new file to use as a mutex.
      // If it already exists, creation fails and returns null.
      mutex = create_file_for_writing('lock_file');
    } while (mutex == null);
    
    // Open your log file and append the results
    log_file = open_file_for_appending('the_log_file');
    write(log_file, data);
    close_file(log_file);
    
    // Close and delete the mutex so other processes can create the same file.
    close_file(mutex);
    delete_file('lock_file');
    

    So, all processes will try to create the mutex file but only the one who wins will be able to continue. Once you write your output, close and delete the mutex so other processes can do the same.

    CyberShadow : You must have missed the part where he said he needs synchronization between different computers.
    Jiri Klouda : And this solution will not work over NFS as he requested.
    Seb : Why wouldn´t this work? I don´t mean writing a file locally in each computer but in a single location for all of them.
  • Over NFS you face problems with client-side caching and stale data. I have written an OS-independent lock module that works over NFS. The simple idea of creating a [datafile].lock file does not work well over NFS. The trick is to invert it: create a lock file named [datafile].lock whose presence means the file is NOT locked, and have a process that wants the lock rename the file to something unique like [datafile].lock.[hostname].[pid]. Rename is an atomic enough operation that it works well enough over NFS to guarantee exclusivity of the lock. The rest is basically fail-safes: retry loops, error checking, and recovering the lock if a process dies before releasing it and renaming the lock file back to [datafile].lock.
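    A minimal Python sketch of this rename-based scheme (the file name, timeout, and polling interval are all illustrative, and the fail-safe/stale-lock recovery the answer mentions is omitted):

```python
import os
import socket
import time

LOCK_FREE = "results.lock"  # its presence means the data file is NOT locked

def acquire_lock(timeout=30.0):
    """Claim the lock by atomically renaming the 'free' lock file to a
    name unique to this host and process. Returns the held name."""
    held = "%s.%s.%d" % (LOCK_FREE, socket.gethostname(), os.getpid())
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            os.rename(LOCK_FREE, held)  # the atomic step
            return held
        except OSError:
            time.sleep(0.5)  # some other process holds the lock; retry
    raise TimeoutError("could not acquire lock within %.0fs" % timeout)

def release_lock(held):
    # Rename back so the next process can claim the lock.
    os.rename(held, LOCK_FREE)
```
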

  • The classic solution is to use a lock file, or more accurately a lock directory. On all common OSs creating a directory is an atomic operation so the routine is:

    • try to create a lock directory with a fixed name in a fixed location
    • if the create failed, wait a second or so and try again - repeat until success
    • write your data to the real data file
    • delete the lock directory

    This has been used by applications such as CVS for many years across many platforms. The only problem occurs in the rare cases when your app crashes while writing and before removing the lock.
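    The four steps above can be sketched in Python like this (the directory name and retry interval are made up for the example):

```python
import os
import time

LOCK_DIR = "output.lock.d"  # fixed name in a fixed location

def locked_write(write_fn, retry_interval=1.0):
    """Run write_fn() while holding the directory lock.
    os.mkdir fails if the directory already exists, so creating it
    doubles as an atomic test-and-set on common filesystems."""
    while True:
        try:
            os.mkdir(LOCK_DIR)  # step 1: try to create the lock directory
            break
        except FileExistsError:
            time.sleep(retry_interval)  # step 2: wait and try again
    try:
        write_fn()  # step 3: write to the real data file
    finally:
        os.rmdir(LOCK_DIR)  # step 4: delete the lock directory
```
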

  • Lock File with a twist

    Like other answers have mentioned, the easiest method is to create a lock file in the same directory as the datafile.

    Since you want to be able to access the same file from multiple PCs, the best solution I can think of is to include in the lock file the identifier of the machine currently writing to the data file.

    So the sequence for writing to the data file would be:

    1. Check if there is a lock file present

    2. If there is a lock file, see if I'm the one owning it by checking that its content has my identifier.
      If that's the case, just write to the data file then delete the lock file.
      If that's not the case, just wait a second or a small random length of time and try the whole cycle again.

    3. If there is no lock file, create one with my identifier and try the whole cycle again to avoid race condition (re-check that the lock file is really mine).

    Along with the identifier, I would record a timestamp in the lock file and check whether it's older than a given timeout value.
    If the timestamp is too old, assume that the lock file is stale and just delete it, as it would mean one of the PCs writing to the data file has crashed or lost its connection.
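    One pass of that check/claim cycle might look like this in Python (the lock file name, identifier format, and staleness timeout are all invented for the sketch; note that the claim step deliberately returns False so ownership is re-checked on the next cycle, as step 3 requires):

```python
import os
import socket
import time

LOCK = "data.lock"               # illustrative lock file name
MY_ID = "%s:%d" % (socket.gethostname(), os.getpid())
STALE_AFTER = 60.0               # illustrative staleness timeout, in seconds

def try_acquire():
    """One pass of the check/claim cycle described above. Returns True
    only once the lock file provably contains our identifier."""
    if os.path.exists(LOCK):
        try:
            with open(LOCK) as f:
                owner, stamp = f.read().split("\n")[:2]
        except (OSError, ValueError):
            return False                 # lock vanished or is half-written
        if owner == MY_ID:
            return True                  # step 2: it's ours, safe to write
        if time.time() - float(stamp) > STALE_AFTER:
            os.remove(LOCK)              # stale lock: owner probably crashed
        return False                     # someone else owns it; retry later
    with open(LOCK, "w") as f:           # step 3: claim it...
        f.write("%s\n%f\n" % (MY_ID, time.time()))
    return False                         # ...then re-check on the next pass
```
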

    Another solution

    If you are in control of the data file's format, another option is to reserve a structure at the beginning of the file to record whether it is locked or not.
    If you reserve a single byte for this purpose, you could decide, for instance, that 00 means the data file isn't locked, and that any other value is the identifier of the machine currently writing to it.
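    A sketch of the reserved-byte idea in Python (illustrative only: the file name and one-byte id are assumptions, the read-then-write below is not atomic, and the NFS caching issues discussed next apply to it just as much):

```python
MY_ID = 7  # this machine's one-byte identifier (an assumption)

def try_lock_header(path):
    """Claim the data file by writing our id into its first byte,
    but only if that byte is currently 00 (unlocked)."""
    with open(path, "r+b") as f:
        if f.read(1) != b"\x00":
            return False  # some other machine is writing
        f.seek(0)
        f.write(bytes([MY_ID]))
        return True

def unlock_header(path):
    with open(path, "r+b") as f:
        f.write(b"\x00")  # mark the file unlocked again
```
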

    Issues with NFS

    OK, I'm adding a few things because Jiri Klouda correctly pointed out that NFS uses client-side caching that will result in the actual lock file being in an undetermined state.

    A few ways to solve this issue:

    • mount the NFS directory with the noac or sync options. This is easy, but it doesn't completely guarantee data consistency between client and server, so there may still be issues; in your case it may be OK.

    • Open the lock file or data file with the O_DIRECT, O_SYNC, or O_DSYNC flags. This is supposed to disable caching altogether.
      It will lower performance but will ensure consistency.

    • You may be able to use flock() to lock the data file but its implementation is spotty and you will need to check if your particular OS actually uses the NFS locking service. It may do nothing at all otherwise.
      If the data file is locked, then another client opening it for writing will fail.
      Oh yeah, and it doesn't seem to work on SMB shares, so it's probably best to just forget about it.

    • Don't use NFS and just use Samba instead: there is a good article on the subject and why NFS is probably not the best answer to your usage scenario.
      You will also find in this article various methods for locking files.

    • Jiri's solution is also a good one.

    Basically, if you want to keep things simple, don't use NFS for frequently-updated files that are shared amongst multiple machines.

    Something different

    Use a small database server to save your data into and bypass the NFS/SMB locking issues altogether or keep your current multiple data files system and just write a small utility to concatenate the results.
    It may still be the safest and simplest solution to your problem.

    Jiri Klouda : This solution, while working fine on single computer, will run into race conditions because of NFS client side caching.
    janneb : Note that NFSv4 fixes many of the problems with older versions of the protocol.
  • Why not just build a simple server which sits between the file and the other computers?

    Then if you ever wanted to change the data format, you would only have to modify the server, and not all of the clients.

    In my opinion building a server would be much easier than trying to use a Network file system.
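    A minimal sketch of such a server in Python (the port, output file name, and one-line-per-connection protocol are all invented here): clients send a line of results over TCP, and because the server is the only process touching the file, no file locking is needed at all.

```python
import socketserver

OUT_FILE = "combined_results.txt"  # single output file, local to the server

class AppendHandler(socketserver.StreamRequestHandler):
    def handle(self):
        # One line of results per connection; the server serializes
        # all writes, so clients never contend for the file.
        line = self.rfile.readline().decode()
        if line:
            with open(OUT_FILE, "a") as f:
                f.write(line)

def serve(host="0.0.0.0", port=9999):
    with socketserver.TCPServer((host, port), AppendHandler) as srv:
        srv.serve_forever()
```
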

    Jiri Klouda : Or just use a database and store the data in a proper database and locking problems solved.
    dsimcha : I don't have a database configured and I don't want to configure one just to solve such a simple problem.

Configuring transport security for WCF

I have a Windows service that hosts a WCF service, and a web service on a different machine acting as a client. I have the NetTcpBinding set to Transport security using Windows authentication. Am I correct to assume that the Windows user the web service runs under must have permission to access the WCF service on the other machine? If the web service is running under NetworkService, is it possible to use that account, or do I need to set up a new user for it?

From stackoverflow
  • See http://msdn.microsoft.com/en-us/library/ms684272(VS.85).aspx for good info on NetworkService. What will happen is that your WCF client will attempt to authenticate as domain\computername$ to the machine hosting the service. I personally prefer to use a specific identity, for auditing purposes.

    Jesse Weigert : It's better to run as network service because it doesn't require maintaining passwords on the network. Machine account passwords change every 30 days, network account passwords tend to expire under domain policies and need to be manually changed.
  • Yes, you'll need to set up another user. NetworkService is a local account and will not exist on the WCF hosting machine. (Well, it does, but it has a different password, so it is not shared.)

    You have a couple of choices - if both machines are in the domain you can run the web application pool as a domain user, or if you're in a workgroup you can create the same username/password combination on both machines and configure the web site to run under that account. In either case you need to assign the right privileges to the new account by issuing

    aspnet_regiis -ga MachineName\AccountName
    

    If you are in a domain and using Kerberos authentication, you will also need to set up an SPN for the new user account:

    setspn -A HTTP/webservername domain\customAccountName
    setspn -A HTTP/webservername.fullyqualifieddomainname domain\customAccountName
    

Handling ObjectDisposedException correctly in an IDisposable class hierarchy

When implementing IDisposable correctly, most implementations, including the framework guidelines, suggest including a private bool disposed; member in order to safely allow multiple calls to Dispose(), Dispose(bool) as well as to throw ObjectDisposedException when appropriate.

This works fine for a single class. However, when you subclass from your disposable resource, and a subclass contains its own native resources and unique methods, things get a little bit tricky. Most samples show how to override Dispose(bool disposing) correctly, but do not go beyond that to handling ObjectDisposedException.

There are two questions that I have in this situation.


First:

The subclass and the base class both need to be able to track the state of disposal. There are a couple of main options I know of -

  • 1) Declare private bool disposed; in both classes. Each class tracks its own this.disposed, and throws as needed.

  • 2) Use protected bool Disposed { get; private set; } instead of a field. This would let the subclass check the disposed state.

  • 3) Provide some protected helper method to check the disposed state, and throw by pulling the current type name via reflection if the object is disposed.

The advantages and disadvantages I see for each option are:

  • 1) This "smells" to me since it contains duplicated booleans, but seems to work fine. I often use this when subclassing other code.

  • 2) This takes out the duplicated booleans, but is not the way the design guidelines books are written, etc. This is what I typically use, though, since it keeps it a single point for state.

  • 3) This seems like the cleanest option to me, but doesn't appear in standard guidelines. Users of the class may find it a slightly less familiar approach than the others.

I, at one point or another, have tried using all three of these approaches. I would like to know advantages and disadvantages to the three approaches, as well as any other ideas for a cleaner, better way to handle this. What choice would you make in handling this, and why?
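For illustration only, here is the shape of option 3 as a language-neutral sketch (Python stands in for C# here since the pattern itself isn't C#-specific; the class names are invented, and RuntimeError stands in for ObjectDisposedException):

```python
class DisposableBase:
    def __init__(self):
        self._disposed = False

    def _throw_if_disposed(self):
        # Reports the concrete type's name, mirroring GetType().FullName.
        if self._disposed:
            raise RuntimeError("%s: object already disposed"
                               % type(self).__name__)

    def dispose(self):
        self._disposed = True  # idempotent: safe to call more than once

class Derived(DisposableBase):
    def do_work(self):
        self._throw_if_disposed()  # one shared flag, no duplicate booleans
        return "worked"
```
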


Second:

When throwing the ObjectDisposedException, what do you use for the name argument? I know the "typical" method call is:

throw new ObjectDisposedException(GetType().FullName);

There is a comment on this page from a Microsoft employee suggesting that implementing the concrete class's full name is the appropriate usage.

In the third option above, this would be the only meaningful choice. However, if the class implements the throwing itself, you could potentially return the name of the class that defines the method that was called. (ie: the base class could return the base class's name, not the concrete subclass)

I don't think this is a good idea - but I ran into this on some code written by somebody else. Are there advantages or disadvantages to having the name of the class implementing the method returned?

From stackoverflow
  • I typically implement the first option. Indeed, it seems to be what the design guidelines recommend. The reason isn't immediately apparent, but I consider it a good one nonetheless: any implementer of the class should have the same sort of protection against the case where the object is disposed as general consumers. In other words, it's best not to assume that whoever is implementing a derived class knows precisely when they can call a certain method, whose successful execution may or may not depend on whether the object has already been disposed or not (though ideally this should be documented via XML comments anyway).

    Regarding your second question, I would again stick with the recommended practice of passing GetType().FullName, especially since it's used in the core .NET framework. Even if you think alternative methods are more appropriate, I think it's best to stick to the method used in the .NET framework for the sake of consistency.

    To conclude: as with all guidelines, it's clearly up to you how you want to implement a certain design feature, though unless you have a particularly good reason it's highly advisable just to stick with them. In both these situations, it probably wouldn't do a great deal of harm to utilise some of the alternatives you suggested in your post, so long as they are used consistently and preferably documented to the user.

  • Declare private bool disposed; in both classes. Each class tracks its own this.disposed, and throws as needed.

    It is the practical solution when you are unable to modify the base class.

    Use protected bool Disposed { get; private set; } instead of a field. This would let the subclass check the disposed state.

    Why not make it public and call it IsDisposed instead? Then you would be doing the same thing as System.Windows.Forms.Control. This is a good solution when you can modify the base class.

    you could potentially return the name of the class that defines the method

    No. The example code you referenced used "GetType().FullName". This is always the name of the most derived type, not the type that implements the particular method.

    Reed Copsey : GetType().FullName will always return the concrete type. I was saying that if you implement bool disposed in each class, individually, you don't HAVE to do that - you can return something else (like the name of the class containing the method).

NSMutableArray as @property with readonly

Suppose I have something like this:

@property (readonly) NSMutableArray *someArray;

Can I modify [obj someArray] even though the @property is set to readonly?

From stackoverflow
  • Yes, you can modify its contents. The readonly only applies to the pointer itself - in that way, it is not like C++'s const.

    Basically, saying "readonly" just means "don't translate a.someArray = foo into [a setSomeArray:foo]". That is, no setter is created.

    (Of course, if you wanted to prevent modification, you'd just use an NSArray instead.)

    Matt Gallagher : You mean C's "const". Quick point about C... it depends what side of the asterisk the "const" is on. The readonly property here IS like NSMutableArray * const someArray; but NOT like const NSMutableArray *someArray; http://en.wikipedia.org/wiki/Const-correctness
    Jesse Rusak : @Matt - Good point.
  • The contents of someArray are modifiable, although the property is not (i.e. a caller cannot change the value of the someArray instance variable by assigning to the property). Note that this is different from the semantics of C++'s const. If you want the array to be actually read-only (i.e. unmodifiable by the reader), you need to wrap it with a custom accessor. In the @interface (assuming your someArray property):

    @property (readonly) NSArray *readOnlyArray;
    

    and in the @implementation

    @dynamic readOnlyArray;
    
    + (NSSet*)keyPathsForValuesAffectingReadOnlyArray {
      return [NSSet setWithObject:@"someArray"];
    }
    - (NSArray*)readOnlyArray {
      return [[[self someArray] copy] autorelease];
    }
    

    Note that the caller will still be able to mutate the state of objects in the array. If you want to prevent that, you need to make them immutable on insertion or perform a deep copy of the array in the readOnlyArray accessor.

MySQL FULLTEXT Search Across >1 Table

As a more general case of this question, because I think it may be of interest to more people: what's the best way to perform a fulltext search on two tables? Assume there are three tables: one for programs (with submitter_id), and one each for tags and descriptions, where object_id is a foreign key referring to records in programs. We want the submitter_id of programs with certain text in their tags OR descriptions. We have to use MATCH AGAINST for reasons that I won't go into here. Don't get hung up on that aspect.

programs
  id
  submitter_id
tags_programs
  object_id
  text
descriptions_programs
  object_id
  text

The following works and executes in 20 ms or so:

SELECT p.submitter_id
FROM programs p
WHERE p.id IN
    (SELECT t.object_id
    FROM titles_programs t
    WHERE MATCH (t.text) AGAINST ('china')
UNION ALL
    SELECT d.object_id
    FROM descriptions_programs d
    WHERE MATCH (d.text) AGAINST ('china'))

but I tried to rewrite this as a JOIN as follows and it runs for a very long time. I have to kill it after 60 seconds.

SELECT p.id 
FROM descriptions_programs d, tags_programs t, programs p
WHERE (d.object_id=p.id AND MATCH (d.text) AGAINST ('china'))
OR    (t.object_id=p.id AND MATCH (t.text) AGAINST ('china'))

Just out of curiosity I replaced the OR with AND. That also runs in a few milliseconds, but it's not what I need. What's wrong with the second query? I can live with the UNION and subselects, but I'd like to understand.

From stackoverflow
  • Join after the filters (e.g. join the results), don't try to join and then filter.

    The reason is that you lose use of your fulltext index.

    Clarification in response to the comment: I'm using the word join generically here, not as JOIN but as a synonym for merge or combine.

    I'm essentially saying you should use the first (faster) query, or something like it. The reason it's faster is that each of the subqueries is sufficiently uncluttered that the db can use that table's full text index to do the select very quickly. Joining the two (presumably much smaller) result sets (with UNION) is also fast. This means the whole thing is fast.

    The slow version winds up walking through lots of data testing it to see if it's what you want, rather than quickly winnowing the data down and only searching through rows you are likely to actually want.

    Doug Kaye : Is the syntax for that any different than the first example?
    Doug Kaye : I don't follow, Markus. (a) How would you write "join after the filters"? And (b) how do we "lose use of the fulltext index"?
  • If you join both tables you end up with lots of records to inspect. Just as an example, if both tables have 100,000 records, fully joining them gives you 10,000,000,000 rows (10 billion!).

    If you change the OR to AND, you allow the engine to filter out all records from table descriptions_programs that don't match 'china', and only then join with titles_programs.

    Anyway, that's not what you need, so I'd recommend sticking to the UNION way.

    Doug Kaye : Is that math correct? If I have 100,000 programs and each one has a title, why wouldn't the join of programs and tags yield just 100,000 rows? And if you also join 100,000 descriptions, don't you still have only 100,000 rows?
    Seb : If you want to match programs with titles, then match them in the join clause. If you just join them without any ON clause, all rows are matched. Do something like FROM descriptions_programs d JOIN tags_programs t ON d.object_id = t.object_id JOIN programs p ON t.object_id = p.id
  • The union is the proper way to go. The join will pull in both fulltext indexes at once and can multiply the number of checks actually performed.

  • Just in case you don't know: MySQL has a built in statement called EXPLAIN that can be used to see what's going on under the surface. There's a lot of articles about this, so I won't be going into any detail, but for each table it provides an estimate for the number of rows it will need to process. If you look at the "rows" column in the EXPLAIN result for the second query you'll probably see that the number of rows is quite large, and certainly a lot larger than from the first one.

    The net is full of warnings about using subqueries in MySQL, but it turns out that many times the developer is smarter than the MySQL optimizer. Filtering results in some manner before joining can cause major performance boosts in many cases.

Use reflection to set the value of a field in a struct which is part of an array of structs

At the moment my code successfully sets the value of fields/properties/arrays of an object using reflection given a path to the field/property from the root object.

e.g.

//MyObject.MySubProperty.MyProperty
SetValue("MySubProperty/MyProperty", "new value", MyObject);

The above example would set the 'MyProperty' property of the 'MyObject' object to 'new value'.

I'm unable to use reflection to set a value of a field in a struct which is part of an array of structs because the struct is a value type (within an array).

Here are some test classes/structs...

public class MyClass
{
    public MyStruct[] myStructArray = new MyStruct[] {
        new MyStruct() { myField = "change my value" }
    };
    public MyStruct[] myOtherStructArray = new MyStruct[] {
        new MyStruct() { myOtherField = "change my value" },
        new MyStruct() { myOtherField = "change my other value" }
    };
}

public struct MyStruct { public string myField; public string myOtherField; }

Below is how I successfully set the value of normal properties/fields and props/fields in lists...

public void SetValue(string pathToData, object newValue, object rootObject)
{
    object foundObject = rootObject;
    foreach (string element in pathToData.Split('/'))
    {
     foundObject = //If element is [Blah] then get the
                      //object at the specified list position
     //OR
        foundObject = //Else get the field/property
    }

    //Once found, set the value (this is the bit that doesn't work for
    //                           fields/properties in structs in arrays)
    FieldInf.SetValue(foundObject, newValue);
}

object myObject = new MyClass();
SetValue("/myStructArray/[0]/myField", "my new value", myObject);
SetValue("/myOtherStructArray/[1]/myOtherField", "my new value", myObject);

After that I want myObject.myStructArray[0].myField == "my new value" and myObject.myOtherStructArray[1].myOtherField == "my new value"

All I need is a replacement for the 'FieldInf.SetValue(foundObject, newValue);' line

Thanks in advance.

From stackoverflow
  • If I had to guess, the bug is in part of the code you omitted, specifically I'd suspect that:

        foundObject = //If element is [Blah] then get the
                      //object at the specified list position
    

    is (unintentionally) setting foundObject to a copy of the object at the specified list position.

    Mark : Hi, thanks for answering; please see my feedback below...
  • My question continued...

    The only solution I found to a similar problem I had (setting a field/property in a struct that is itself a field) was to use...

    //GrandParentObject is myObject
    //GrandParentType is typeof(MyClass)
    //FieldIWantedToSet is the field info of myStruct.FieldIWantedToSet
    FieldInfo oFieldValueTypeInfo = GrandParentType.GetField("myStruct");
    TypedReference typedRefToValueType = TypedReference.MakeTypedReference(GrandParentObject, new FieldInfo[] { oFieldValueTypeInfo });
    FieldIWantedToSet.SetValueDirect(typedRefToValueType, "my new value");
    

    The problem is: how can I use SetValueDirect on an array/list of structs? I'm guessing my old method above will not work when the structs are in an array, because I cannot get the FieldInfo for the struct (since it's in an array).

  • Get the FieldInfo for the array object (not the specific element).

    If it's an array, cast it to a System.Array and use Array.SetValue to set the object's value.

    Mark : Thanks for your answer, but I don't think that will work because the structure is as follows... MyObject.myStructArray[0].myField. So using your method, Array.SetValue
    Mark : ... would have to pass in a brand new struct; I'm trying to set the value of a field in a struct that's in an array
    Reed Copsey : Yes. Whenever you have an array of structs, that's the best approach. You can copy it to a new, local struct, and only overwrite that member, then pass that back in.
    Reed Copsey : As a rule of thumb, though, typically structs should be immutable, so typically, you should avoid structs where one member can be changed. The design guidelines explain why in detail...
    Mark : OK thanks. Looks like I'm catering for situations that are never going to occur in classes. PS Where are these design guidelines you speak of?
    Reed Copsey : Part of the design guidelines for the .NET Framework is available on MSDN. There is also a book written by two Microsoft people that explains all the guidelines in detail. It's one I highly recommend.
    Reed Copsey : See http://www.amazon.com/Framework-Design-Guidelines-Conventions-Development/dp/0321545613/ref=sr_1_1?ie=UTF8&s=books&qid=1237751853&sr=8-1
    Mark : Cheers, I've got the book via work. It's very thorough and good to dip into for a quick info fix while working. Any other good .NET books you could recommend?
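For what it's worth, the copy-modify-store approach Reed Copsey describes can be sketched as a standalone program against the poster's MyClass/MyStruct shapes (this is a sketch of the technique, not the poster's actual path-walking SetValue code):

```csharp
using System;
using System.Reflection;

public struct MyStruct { public string myField; public string myOtherField; }

public class MyClass
{
    public MyStruct[] myStructArray = new MyStruct[]
    {
        new MyStruct { myField = "change my value" }
    };
}

public static class Program
{
    // Sets arrayField[index].structField = newValue via reflection.
    public static void SetStructFieldInArray(
        object root, string arrayFieldName, int index,
        string structFieldName, object newValue)
    {
        FieldInfo arrayInfo = root.GetType().GetField(arrayFieldName);
        Array array = (Array)arrayInfo.GetValue(root);

        // Array.GetValue returns a boxed copy of the struct element...
        object boxedElement = array.GetValue(index);

        // ...so mutate the boxed copy, then store it back into the array.
        FieldInfo structField = boxedElement.GetType().GetField(structFieldName);
        structField.SetValue(boxedElement, newValue);
        array.SetValue(boxedElement, index);
    }

    public static void Main()
    {
        var obj = new MyClass();
        SetStructFieldInArray(obj, "myStructArray", 0, "myField", "my new value");
        Console.WriteLine(obj.myStructArray[0].myField); // my new value
    }
}
```

The trick is that FieldInfo.SetValue works on the boxed copy (a reference type), and Array.SetValue then unboxes it back into the element slot.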

SQL Server Batch Error Handling Problem.

How do I get this batch of SQL to reach the ROLLBACK TRANSACTION part at the end? SQL Server just halts script execution on the bad line of code. I know I can use a try/catch construct, but I'm more interested in how this was handled before SQL Server added try/catch.

BEGIN TRAN

CREATE TABLE TempTable (c1 INT NULL)

INSERT INTO TempTable (c1) SELECT 1

INSERT INTO TempTable (c1) SELECT 'ABS'

IF (@@ERROR = 0) 
BEGIN
    PRINT 'no error'
    COMMIT TRAN
END
    ELSE
BEGIN
    PRINT 'error' -- Why does it never get here???????
    ROLLBACK TRAN
END
From stackoverflow
  • In this case, your 'ABS' aborts the batch because it's a CAST error. An explanation is here, in Erland Sommarskog's excellent article.

    You have to read this article. More than you ever needed to know about SQL error handing.

    Also, you have to test each statement. If the first INSERT failed, you'd still carry on executing (unless you had SET XACT_ABORT ON).

    BEGIN TRAN
    
    CREATE TABLE TempTable (c1 INT NULL)
    
    INSERT INTO TempTable (c1) SELECT 1
    IF @@ERROR <> 0
        GOTO errhandler
    
    INSERT INTO TempTable (c1) SELECT 'ABS'
    IF @@ERROR <> 0
        GOTO errhandler
    
    PRINT 'no error'
    COMMIT TRAN
    GOTO exitpoint
    
    errhandler:
    PRINT 'error' -- Why does it never get here???????
    ROLLBACK TRAN
    
    exitpoint:
    

    If you have SQL Server 2000, then you don't have many options except to add more checks (ISNUMERIC etc.).

    If you have SQL Server 2005, then you should really use the new techniques. Pretty much all code and execution errors are caught cleanly.

    BEGIN TRY
        BEGIN TRAN
    
        CREATE TABLE TempTable (c1 INT NULL)
    
        INSERT INTO TempTable (c1) SELECT 1
    
        INSERT INTO TempTable (c1) SELECT 'ABS'
    
        PRINT 'no error'
        COMMIT TRAN
    END TRY
    BEGIN CATCH
        PRINT 'error' --It will get here for SQL 2005
        ROLLBACK TRAN
    END CATCH
    
    John Sansom : +1: A good clear and concise answer.
    James : +1 Thanks for the thorough response!

Spring Controller destroy method?

Does Spring's Controller have any sort of destroy/cleanup method? I couldn't find anything in the JavaDocs for Controller and AbstractController. I'm looking for the equivalent of javax.servlet.Servlet's destroy() method.

The reason for this is that I'm starting a thread in my Spring controller. I want the thread to terminate whenever the controller is taken out of service (such as when the container is shut down).

From stackoverflow
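For reference, a Spring-managed controller (like any Spring bean) can implement DisposableBean, or declare a destroy-method in its bean definition, to get the equivalent of Servlet.destroy() when the context shuts down. A sketch with hypothetical class names (nothing here comes from the original thread):

```java
import org.springframework.beans.factory.DisposableBean;

public class PollingController implements DisposableBean {

    private final Thread worker = new Thread(new Runnable() {
        public void run() {
            // background work until interrupted
        }
    });

    public PollingController() {
        worker.start();
    }

    // Called by the Spring container when the bean is destroyed
    // (e.g. on context shutdown), analogous to Servlet.destroy().
    public void destroy() throws Exception {
        worker.interrupt();
        worker.join();
    }
}
```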

How can I start a flash video from javascript?

Is it possible to start playing a file inside a flash player by making use of javascript code? If so, how would I do it?

From stackoverflow
  • Try using SWFObject. You can make any ActionScript function visible to JavaScript by using ExternalInterface and declaring the callbacks, so you can trigger an ActionScript function that calls play() (or any other code you want) from your JavaScript code.

    Here is an example:

    Actionscript:

    import flash.external.ExternalInterface;
    
    ExternalInterface.addCallback( "methodName", this, method );
    function method() {
       trace("called from javascript");
    }
    

    Javascript:

    function callAS() {
       swf.methodName(); 
    }
    

    Where methodName is the identifier JavaScript uses to call the method in ActionScript.

  • Yes, it is. You can reference the Flash movie object from JavaScript and control the Flash component in a page. Unfortunately the way you do it is not portable across browsers. See this:

    http://www.permadi.com/tutorial/flashjscommand/

    jeffamaphone : See also: http://www.adobe.com/support/flash/publishexport/scriptingwithflash/
  • Take a look at SWFObject. There a lot of examples on how to accomplish that.

  • You can call any custom function in Flash from JavaScript, which requires you coding both Javascript and Flash.

    See here for some examples: http://kb.adobe.com/selfservice/viewContent.do?externalId=tn_15683.

    Also, using SwfObject helps a long way when dealing with Flash from JavaScript.

    Geo : does this mean that I'm not able to play it if the flash part wasn't written by me? I can just add JS code to the page, but not flash.
    totocaster : No, you can't control a compiled Flash movie from JavaScript without it using ExternalInterface.
    Seb : @Geo: exactly. If you want to play Flash movies, you need to have complete control over its code and behavior. If it's a third-party movie, then you cannot control it (unless, of course, you decompile it and add some code to it, but that's something you never heard from me :P).
    Geo : My plans of world domination are ruined!
  • If anyone needs more information about learning Flash, I can provide it.

    Hosam abuzarqa alhos232@hotmail.com
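One detail the first answer's JavaScript glosses over is where the `swf` reference comes from. A common way to obtain it is sketched below (the element id `myMovie` is an assumption; this must run in a browser):

```javascript
function getFlashMovie(movieName) {
  // IE exposes embedded movies on window; other browsers via the DOM.
  return navigator.appName.indexOf("Microsoft") !== -1
    ? window[movieName]
    : document[movieName] || document.getElementById(movieName);
}

var swf = getFlashMovie("myMovie");
swf.methodName(); // the callback registered via ExternalInterface.addCallback
```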

managing code changes from 3rd party API

I'm working with a 3rd party API that is distributed as source code. This is great because I can fix things on my own, but I also receive lots of updates from the 3rd party.

I have my own svn repository for my code base that includes my version of the API. I get the official API updates by checking out a version from the 3rd party's svn repository.

Merging the changes is a painful process. Is there a better way to do this? Would this be easier with a distributed source control system?

From stackoverflow
  • The problem is that you're coupling yourself to the API. Find a way to extend the API rather than modify it, if you can. Put it in a separate module or other compilation output (i.e. .JAR, .DLL, .SO file). Try to decouple yourself from it.

    If you can't do that, you're pretty much stuck dealing with a merge every release.

    Of course, if the API is open source, you might consider submitting your changes...

  • I don't think distributed version control systems will help much in your scenario as it seems you'll be the one doing the merging to your own repository. If the nature of the merge is the API and the code that depends on that API, nothing can help you much.

  • Abstract the API from your code. There are many design patterns you can use (facade, proxy, etc) that will help you keep your code insulated from changes in the API. If the API changes, change the code that interacts with it instead of the API itself. You'll still need to make changes to your code every time they update their code, but at least it will be minimal and isolated to one place.

Calling CreateProcessAsUser from C#

I've been attempting to create a new process under the context of a specific user using the CreateProcessAsUser function of the Windows API, but seem to be running into a rather nasty security issue...

Before I explain any further, here's the code I'm currently using to start the new process (a console process - PowerShell to be specific, though it shouldn't matter).

    private void StartProcess()
    {
        bool retValue;

        // Create startup info for new console process.
        var startupInfo = new STARTUPINFO();
        startupInfo.cb = Marshal.SizeOf(startupInfo);
        startupInfo.dwFlags = StartFlags.STARTF_USESHOWWINDOW;
        startupInfo.wShowWindow = _consoleVisible ? WindowShowStyle.Show : WindowShowStyle.Hide;
        startupInfo.lpTitle = this.ConsoleTitle ?? "Console";

        var procAttrs = new SECURITY_ATTRIBUTES();
        var threadAttrs = new SECURITY_ATTRIBUTES();
        procAttrs.nLength = Marshal.SizeOf(procAttrs);
        threadAttrs.nLength = Marshal.SizeOf(threadAttrs);

        // Log on user temporarily in order to start console process in its security context.
        var hUserToken = IntPtr.Zero;
        var hUserTokenDuplicate = IntPtr.Zero;
        var pEnvironmentBlock = IntPtr.Zero;
        var pNewEnvironmentBlock = IntPtr.Zero;

        if (!WinApi.LogonUser("UserName", null, "Password",
            LogonType.Interactive, LogonProvider.Default, out hUserToken))
            throw new Win32Exception(Marshal.GetLastWin32Error(), "Error logging on user.");

        var duplicateTokenAttrs = new SECURITY_ATTRIBUTES();
        duplicateTokenAttrs.nLength = Marshal.SizeOf(duplicateTokenAttrs);
        if (!WinApi.DuplicateTokenEx(hUserToken, 0, ref duplicateTokenAttrs,
            SECURITY_IMPERSONATION_LEVEL.SecurityImpersonation, TOKEN_TYPE.TokenPrimary,
            out hUserTokenDuplicate))
            throw new Win32Exception(Marshal.GetLastWin32Error(), "Error duplicating user token.");

        try
        {
            // Get block of environment vars for logged on user.
            if (!WinApi.CreateEnvironmentBlock(out pEnvironmentBlock, hUserToken, false))
                throw new Win32Exception(Marshal.GetLastWin32Error(),
                    "Error getting block of environment variables for user.");

            // Read block as array of strings, one per variable.
            var envVars = ReadEnvironmentVariables(pEnvironmentBlock);

            // Append custom environment variables to list.
            foreach (var var in this.EnvironmentVariables)
                envVars.Add(var.Key + "=" + var.Value);

            // Recreate environment block from array of variables.
            var newEnvironmentBlock = string.Join("\0", envVars.ToArray()) + "\0";
            pNewEnvironmentBlock = Marshal.StringToHGlobalUni(newEnvironmentBlock);

            // Start new console process.
            retValue = WinApi.CreateProcessAsUser(hUserTokenDuplicate, null, this.CommandLine,
                ref procAttrs, ref threadAttrs, false, CreationFlags.CREATE_NEW_CONSOLE |
                CreationFlags.CREATE_SUSPENDED | CreationFlags.CREATE_UNICODE_ENVIRONMENT,
                pNewEnvironmentBlock, null, ref startupInfo, out _processInfo);
            if (!retValue) throw new Win32Exception(Marshal.GetLastWin32Error(),
                "Unable to create new console process.");
        }
        catch
        {
            // Catch any exception thrown here so as to prevent any malicious program operating
            // within the security context of the logged in user.

            // Clean up.
            if (hUserToken != IntPtr.Zero)
            {
                WinApi.CloseHandle(hUserToken);
                hUserToken = IntPtr.Zero;
            }

            if (hUserTokenDuplicate != IntPtr.Zero)
            {
                WinApi.CloseHandle(hUserTokenDuplicate);
                hUserTokenDuplicate = IntPtr.Zero;
            }

            if (pEnvironmentBlock != IntPtr.Zero)
            {
                WinApi.DestroyEnvironmentBlock(pEnvironmentBlock);
                pEnvironmentBlock = IntPtr.Zero;
            }

            if (pNewEnvironmentBlock != IntPtr.Zero)
            {
                Marshal.FreeHGlobal(pNewEnvironmentBlock);
                pNewEnvironmentBlock = IntPtr.Zero;
            }

            throw;
        }
        finally
        {
            // Clean up.
            if (hUserToken != IntPtr.Zero)
                WinApi.CloseHandle(hUserToken);

            if (hUserTokenDuplicate != IntPtr.Zero)
                WinApi.CloseHandle(hUserTokenDuplicate);

            if (pEnvironmentBlock != IntPtr.Zero)
                WinApi.DestroyEnvironmentBlock(pEnvironmentBlock);

            if (pNewEnvironmentBlock != IntPtr.Zero)
                Marshal.FreeHGlobal(pNewEnvironmentBlock);
        }

        _process = Process.GetProcessById(_processInfo.dwProcessId);
    }

For the sake of the issue here, ignore the code dealing with the environment variables (I've tested that section independently and it seems to work.)

Now, the error I get is the following (thrown at the line following the call to CreateProcessAsUser):

"A required privilege is not held by the client" (error code 1314)

(The error message was discovered by removing the message parameter from the Win32Exception constructor. Admittedly, my error handling code here may not be the best, but that's a somewhat irrelevant matter. You're welcome to comment on it if you wish, however.) I'm really quite confused as to the cause of this vague error in this situation. MSDN documentation and various forum threads have only given me so much advice, and especially given that the causes for such errors appear to be widely varied, I have no idea which section of code I need to modify. Perhaps it is simply a single parameter I need to change, but I could be making the wrong/not enough WinAPI calls for all I know. What confuses me greatly is that the previous version of the code, which used the plain CreateProcess function (equivalent except for the user token parameter), worked perfectly fine. As I understand it, it is only necessary to call the LogonUser function to receive the appropriate token handle and then duplicate it so that it can be passed to CreateProcessAsUser.

Any suggestions for modifications to the code as well as explanations would be very welcome.

Notes

I've been primarily referring to the MSDN docs (as well as PInvoke.net for the C# function/struct/enum declarations). The following pages in particular seem to have a lot of information in the Remarks sections, some of which may be important and eluding me:

Edit

I've just tried out Mitch's suggestion, but unfortunately the old error has just been replaced by a new one: "The system cannot find the file specified." (error code 2)

The previous call to CreateProcessAsUser was replaced with the following:

retValue = WinApi.CreateProcessWithTokenW(hUserToken, LogonFlags.WithProfile, null,
    this.CommandLine, CreationFlags.CREATE_NEW_CONSOLE |
    CreationFlags.CREATE_SUSPENDED | CreationFlags.CREATE_UNICODE_ENVIRONMENT,
    pNewEnvironmentBlock, null, ref startupInfo, out _processInfo);

Note that this code no longer uses the duplicate token but rather the original, as the MSDN docs appear to suggest.

And here's another attempt using CreateProcessWithLogonW. The error this time is "Logon failure: unknown user name or bad password" (error code 1326)

retValue = WinApi.CreateProcessWithLogonW("Alex", null, "password",
    LogonFlags.WithProfile, null, this.CommandLine,
    CreationFlags.CREATE_NEW_CONSOLE | CreationFlags.CREATE_SUSPENDED |
    CreationFlags.CREATE_UNICODE_ENVIRONMENT, pNewEnvironmentBlock,
    null, ref startupInfo, out _processInfo);

I've also tried specifying the username in UPN format ("Alex@Alex-PC") and passing the domain independently as the second argument, all to no avail (identical error).

From stackoverflow
  • From here:

    Typically, the process that calls the CreateProcessAsUser function must have the SE_ASSIGNPRIMARYTOKEN_NAME and SE_INCREASE_QUOTA_NAME privileges. If this function fails with ERROR_PRIVILEGE_NOT_HELD (1314), use the CreateProcessWithLogonW function instead. CreateProcessWithLogonW requires no special privileges, but the specified user account must be allowed to log on interactively. Generally, it is best to use CreateProcessWithLogonW to create a process with alternate credentials.

    See this blog post How to call CreateProcessWithLogonW & CreateProcessAsUser in .NET

    Noldorin : Thanks for pointing that out. Unfortunately, CreateProcessWithLogonW isn't the solution for me given that I need to execute this code from within a Windows Service too, and as pointed out by one of the commenters on that blog post, it isn't possible.
    Noldorin : (contd.) Maybe experimenting with some of the Local Security Policy rules will help, judging by the paragraph quoted in your post.
    Noldorin : Note: I've updated the original post having tried your proposed solution.
    Noldorin : Just noticed that I tried CreateProcessWithTokenW. No luck with CreateProcessWithLogonW: error 1326 (invalid logon credentials). Isn't this wonderful?... a completely unique error for each of the three functions.
    Mitch Wheat : @Noldorin: an error of "invalid logon credentials" seems fairly conclusive. Are you sure you have the Domain\username and password correct?
    Noldorin : @Mitch: I would have thought so too, but it seems not (or perhaps I'm missing something obvious). The program is simply running on my local dev computer, so there shouldn't be any complications with domain.
    Noldorin : (contd.) The following combinations of username/domain are tested and don't work: "Alex", null; "Alex", "Alex-PC"; "Alex@Alex-PC", null (error 1326, "Logon failure: unknown user name or bad password"). Note that I've also updated my original post with the code. Thanks for the help so far...
  • Ahh... seems like I've been caught out by one of the biggest gotchas in WinAPI interop programming. Also, posting the code for my function declarations would have been a wise idea in this case.

    Anyway, all I needed to do was add an argument to the DllImport attribute of each function specifying CharSet = CharSet.Unicode. This did the trick for both the CreateProcessWithLogonW and CreateProcessWithTokenW functions. I guess it finally just hit me that the W suffix of the function names refers to Unicode and that I needed to specify this explicitly in C#! Here are the correct function declarations in case anyone is interested:

    [DllImport("advapi32", CharSet = CharSet.Unicode, SetLastError = true)]
    public static extern bool CreateProcessWithLogonW(string principal, string authority,
        string password, LogonFlags logonFlags, string appName, string cmdLine,
        CreationFlags creationFlags, IntPtr environmentBlock, string currentDirectory,
        ref STARTUPINFO startupInfo, out PROCESS_INFORMATION processInfo);
    
    [DllImport("advapi32", CharSet = CharSet.Unicode, SetLastError = true)]
    public static extern bool CreateProcessWithTokenW(IntPtr hToken, LogonFlags dwLogonFlags,
        string lpApplicationName, string lpCommandLine, CreationFlags dwCreationFlags,
        IntPtr lpEnvironment, string lpCurrentDirectory, [In] ref STARTUPINFO lpStartupInfo,
        out PROCESS_INFORMATION lpProcessInformation);
    
  • Jonathan Peppers provided this great piece of code that fixed my issues

    http://social.msdn.microsoft.com/Forums/en-US/csharpgeneral/thread/0c0ca087-5e7b-4046-93cb-c7b3e48d0dfb?ppud=4

Table Spool/Eager Spool

I have a query

select * into NewTab from OpenQuery(rmtServer, 'select c1, c2 from rmtTab')

When I look at the execution plan, it tells me that it performs a 'Table Spool/Eager Spool' that 'stores the data in a temporary table to optimize rewinds'

Now I don't anticipate any rewinds. If there is a crash of some sort, I can just drop newTab and start over.

Is there any way I can stop it from storing the data in a temporary table?

From stackoverflow
  • Create NewTab first and then do the INSERT INTO ... FROM OPENQUERY.

  • It's probably the openquery causing it.

    There is no information on how many rows, no statistics, nothing, so SQL Server will simply spool the results to allow it to evaluate the later bits, I suspect. That's the basic idea.

    I'd suggest separating the creation and fill of newtab.

    By the way, rewind is not rollback. Rewind has nothing to do with transaction safety. It's SQL Server anticipating reuse of the rows. Which is correct, because the openquery is a black box.

    Look near the bottom of this Simple Talk article for rewinds. You have a "Remote query".

    Edit

    Based on something I found just last week, look at sp_tableoption.

    When used with the OPENROWSET bulk rowset provider to import data into a table without indexes, TABLOCK enables multiple clients to concurrently load data into the target table with optimized logging and locking

    Try TABLOCK on your fill. We had some fun with a client developer whose .NET SqlBulkCopy usage gave very bad performance.

    Also this, from Kalen Delaney:

    It's not intuitive.

    cindi : I tried separating the creation and fill (also suggested by Mladen), but I got the same query plan. But that link is very interesting.
    cindi : I suppose my simplistic view of the query is that a remote query is going to return a table of information, the select/into is going to create a blank table and then just copy whatever rows it gets into the new table. So why is it doing this twice?
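Combining the two suggestions above (create and fill separately, and put TABLOCK on the fill), a sketch might look like this. The column types are assumptions, and note that cindi reports the separate create/fill alone didn't change the plan, so TABLOCK is the variable left to test:

```sql
CREATE TABLE NewTab (c1 INT NULL, c2 INT NULL);

-- TABLOCK enables the optimized logging/locking path sp_tableoption mentions.
INSERT INTO NewTab WITH (TABLOCK) (c1, c2)
SELECT c1, c2
FROM OPENQUERY(rmtServer, 'select c1, c2 from rmtTab');
```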

How to use PowerShell Get-Member cmdlet

A newbie question:

The command:

[Math] | Get-Member

Returns all members of System.RuntimeType. Why is that?

Also the command:

Get-Member -InputObject [Math]

Returns all members of System.String. If "[Math]" is interpreted as string here, how can I make it a math object?

Also, does Get-Member take any positional parameters? How can I tell?

From stackoverflow
  • Also, does Get-Member take any positional parameters? How can I tell?

    If the parameter name is wrapped in '[]' then the name is optional, so the parameter is positional. For example, in the Get-Member definition below, Name is positional but InputObject is not.

    Get-Member [[-Name] <string[]>] [-Force] [-InputObject <psobject>] [-MemberType {AliasProperty | CodeProperty | Property | NoteProperty | ScriptProperty | Properties | PropertySet | Method | CodeMethod | ScriptMethod | Methods | ParameterizedProperty | MemberSet | Event | All}] [-Static] [-View {Extended | Adapted | Base | All}] [<CommonParameters>]

    For the first two questions, it seems like you expect them to behave like objects, but you are entering a namespace/class. If you do "1 | gm" or "gm -InputObject 1" you will see more like what you want/expect.

  • You are getting a System.RuntimeType from [Math] because that is what it is. It's a Class type as opposed to an object of that particular type. You haven't actually constructed a [Math] object. You will get the same output if you typed:

    [String] | gm
    

    However, if you constructed a string object from the String type, you would get the string members:

    PS C:\> [String]("hi") | gm
    
    
       TypeName: System.String
    
    Name             MemberType            Definition
    ----             ----------            ----------
    Clone            Method                System.Object Clone()
    CompareTo        Method                System.Int32 CompareTo(Object value), System.Int32 CompareTo(String strB)
    Contains         Method                System.Boolean Contains(String value)
    CopyTo           Method                System.Void CopyTo(Int32 sourceIndex, Char[] destination, Int32 destinationIn...
    etc...
    

    Since System.Math has only static members, you can't construct an object of it. To see its members you can use the GetMethods() function of System.RuntimeType:

    [Math].GetMethods()
    

    You can use one of the format-* cmdlets to format the output:

    [Math].GetMethods() | format-table
    

    Edit: Oh, and I should add, to invoke one of the static members, you would do it like this:

    [Math]::Cos(1.5)
    
  • I just wrote a blog post on exploring static members of classes with PowerShell, which might help.

    What is happening when you pipe [Math] to Get-Member is that you are passing in an object of type System.RuntimeType, and it returns the members of that type.

    There is a switch parameter for Get-Member which allows you to examine all the static members of a class:

    [Math] | get-member -static
    

    If you need to find instance members, you will need to create an instance of the class and pipe that to Get-Member.
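Tying the answers together: since Name is positional for Get-Member (per the syntax definition in the first answer), these two sketches are equivalent:

```powershell
[Math] | Get-Member -Static -Name Sqrt   # Name passed by name
[Math] | Get-Member -Static Sqrt         # Name bound positionally
```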

Binding parameters to Windows Workflow instance & ignoring unused ones

I have a bunch of named value parameters in a Dictionary<string, object>, which I want to pass into different workflows. The catch is that each workflow will only need a subset of the properties in the dictionary, and I don't know beforehand which workflow needs which properties.

The problem is that when I call WorkflowRuntime.CreateWorkflow with the dictionary to bind with, it fails with:

The activity '<workflow name>' has no public writable property named '<property name>'

I know what this means. The property in the workflow is not defined because this particular workflow does not need that particular property (other workflows might).

Is there anyway to bind a dictionary to workflow properties, and IGNORE properties that are not defined on the workflow?

From stackoverflow
  • Why don't you pass your dictionary into the workflow instances? Your workflow definitions then just have to have a property for that dictionary.

    var inputs = new Dictionary<string, YOUR_CUSTOM_TYPE>();
    // ...
    // fill your dictionary according to the context
    // ...
    var inputParams = new Dictionary<string, object>();
    inputParams["WF_PROP_NAME"] = inputs;
    var wfInstance = wfRuntime.CreateWorkflow(WF_TYPE, inputParams);
    

    This way your workflows just pull the items of interest out of the dictionary.
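To make the shape concrete, here is a runnable sketch of that pattern. The ApprovalWorkflow class, the "Inputs" key, and the parameter names are purely illustrative stand-ins, not real Windows Workflow types; the point is that all parameters travel under one dictionary key, so no workflow ever sees a property it does not declare:

```csharp
using System;
using System.Collections.Generic;

// Stand-in for a workflow definition: the runtime would bind the single
// "Inputs" key of the parameter dictionary to this public writable property.
public class ApprovalWorkflow
{
    public Dictionary<string, object> Inputs { get; set; }

    // Each workflow pulls only the keys it cares about; unknown keys
    // are simply ignored instead of causing a binding error.
    public object GetInput(string key)
    {
        object value;
        return Inputs != null && Inputs.TryGetValue(key, out value) ? value : null;
    }
}

public static class Demo
{
    public static void Main()
    {
        var inputParams = new Dictionary<string, object>
        {
            { "Inputs", new Dictionary<string, object>
                {
                    { "CustomerId", 42 },
                    { "Region", "EU" }
                }
            }
        };

        // What CreateWorkflow would effectively do with the dictionary:
        var wf = new ApprovalWorkflow
        {
            Inputs = (Dictionary<string, object>)inputParams["Inputs"]
        };

        Console.WriteLine(wf.GetInput("CustomerId")); // 42
        Console.WriteLine(wf.GetInput("Missing") == null); // True
    }
}
```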