Monthly Archives: April 2009

Revisiting the passage of time via a tree

Back on 7/30/2005, I marked time by reporting a new tree planting. Until now, I hadn’t followed up with how time has “grown” since.

Healthy Purple Robe Locust tree

When you compare the original picture to the one above, it’s clear that the tree has grown quite nicely. In addition, you can see why this particular variety of tree was planted–for its beautiful color.

Detail of flowering Purple Robe Locust tree

I’m fortunate to have the same type of trees directly outside my office windows at work (three stories up).

Yes, you have to sweep up after the tree when its blooms fall to the ground. However, they’re not sticky, which puts this “downside” well above that of the sap-dropping city plants lining the rest of the street.

There are other, less obvious markers of the time between the original picture of the newly planted tree and the more recent one:

  • The neighbors widened their driveway with more concrete.
  • The neighbors painted the outside of their house.
  • We replaced the car in the full-size original picture.
  • We replaced the entire fence around our property, including the shared segment in view of both pictures.
  • Just as I had to repair the surrounding sprinkler system while originally planting the tree, I recently had to go underground to repair a node on the same line.

Getting Twitter

Twitter

Yeah, I know that Twitter lately is all about Oprah, CNN and Ashton Kutcher, but it’s also about brief remarks, gripes and triumphs related to products and/or services that you send into the worldwide market. (And if you were waiting for The Tipping Point, it’s already occurred for Twitter, IMHO.)

BTW, before I go any further, I’m @craigsmusings on Twitter. (Thanks, Dan.)

If a tree falls in a forest, it always makes a noise–regardless of your presence there. There are social conversations that occur online (e.g. Facebook, blogs, wikis, Twitter, newsgroups, IRC, etc.), and they will continue to occur regardless of your presence there, too. However, seeing the conversation but not engaging is an especially risky position to take these days.

Consider the following conversation on Twitter:

http://twitter.com/johnsmith

Very disappointed in _YOUR_PRODUCT_HERE_, does not appear to have very much to it at all….if anything!
12:10 AM Apr 23rd from TweetDeck

http://twitter.com/janedoe

@johnsmith Did you see a live presentation or play with it?
4:45 AM Apr 23rd from TwitterBerry

http://twitter.com/johnsmith

@janedoe Had a play with it, will blog later this week, does not seem to give us anything to use as an accelerator
4:52 AM Apr 23rd from TweetDeck in reply to janedoe

http://twitter.com/janedoe

@johnsmith Ouch! That’s the point in theory.
4:58 AM Apr 23rd from TwitterBerry

So, what will John Smith blog, exactly? He’s indicated that his post is forthcoming but also that there may be time to engage him: understand his concern and, after listening, possibly influence him by demonstrating value.

Jane appears to be an interested party, too. Is Jane a known advocate, possibly trying to reach out on your behalf? Is Jane known to be skeptical?

How can you “see” this conversation?

I use TweetDeck, an Adobe AIR-based Twitter client, for my tweeting, etc. It works equally well on both MacOS and Windows. (There are many other clients out there, too!)

TweetDeck

TweetDeck allows me to do a number of useful things.

  • For example, the leftmost column/pane is a group. (You can read that tiny font, right? ;-) ) In my case, I filtered All Friends (i.e. those I follow on Twitter) down to just the subset that tweets about content management. (Note the horizontal scroll bar on the bottom; the default “All Friends” column/pane is off to the far right, where I moved it to reduce visible UI changes.)
  • The “Replies” column/pane is just what it implies–tweets in reply to me from others.
  • The “Direct Messages” column/pane contains DM’s from me and DM’s to me.
  • The two rightmost columns/panes in view above are searches. Since these are Twitter-based searches–one for tweets containing “CMIS” and another for tweets containing both “EMC” and “Documentum”–I receive traffic updates that apply in near realtime (unlike, e.g., a Google search that requires one to hit Refresh to see new results).

Anyway, I can visit John Smith’s Twitter profile to learn that he has a 70:30 ratio (i.e. he’s following 70 Twitter users, and 30 are following him). Clearly, Mr. Smith is not a “rock star” by Twitter standards. (Certainly, I am not either!)

However, consider the junior high campfire song’s sentiment: “It only takes a spark, to get a fire going…” This goes back to my point above: there may still be time to engage him, understand his concern and, after listening, possibly influence him by demonstrating value (and spark a positive fire–however big or small–about your product or service).

The truth is that, although I’ve been blogging for a while now, I’m relatively new to Twitter. Fortunately for me, I have great resources in my “2.0 type” EMC colleagues and elsewhere online. For example, I recommend that you check out Gina Minks’ Twitter Cheat Sheet. (I understand from Gina that a v2.0 release is due out in time for EMC World, too.)

I recall during last year’s Microsoft Strategic Architect Forum (SAF) that a good industry colleague of mine suggested an “I don’t get Twitter” topic for the open space segment of that afternoon. I egged him on to make the suggestion, so of course I attended…and I think that everyone learned a fair bit in the process.

Since then, I’ve begun to tweet seriously. Already that engagement has paid dividends, and because most of my cross-domain architect colleagues don’t yet tweet, I thought I’d humbly offer this post to get them to “dive into” Twitter, too, in a way that’s meaningful both to them and to their communities. (You know who you are. :-) )

For those who weren’t at or don’t know about SAF, Microsoft worked with Mindjet to mind map the open space sessions. Here are the notes from the “I don’t get Twitter” session in mind map form–just click the following image for the .mmap (MindManager 8 format) file:

SAF08 topic - 'I don't get Twitter' (notes as mind map)

So, what do you think of Twitter? If you find it useful, how do you receive value from it?

Update 4/30/2009: Gina Minks just published a new cheat sheet for tweeting from your phone.

First 100K concurrent user ECM benchmark

Earlier today, EMC formally announced the results of its significant Enterprise Content Management benchmark with Microsoft and HP.

The newly released study is one of the largest-ever benchmarks in the ECM industry, demonstrating 100,000 [concurrent] users of Documentum 6.5 [SP1] engaging in a variety of content management-related transactions and sustaining that workload over the course of a 12-hour workday. Transactions included the most common content management activities.

Here’s a picture of the EMC-HP-Microsoft team:

EMC-HP-MSFT benchmark team

From left to right:

  • Surdeep Sharma – Microsoft SQL Server Premier Field Engineer
  • Pat Kirby – EMC Documentum Performance Engineer
  • Vishnu Badikol – EMC Documentum Performance Engineer
  • Joseph Isenhour – Microsoft Enterprise Engineering Center Program Manager
  • Gordon Newman – EMC Documentum Sr. Manager Performance Engineering
  • Gunter Zink (photographer hence not pictured) – HP Integrity Superdome Engineering Group Manager

Here’s another picture that I copied from Gordon showing Pat and Vishnu working with a type of “geek heaven” (i.e. a view of PerfMon on a 64-core machine :-) ):

What a 64-core PerfMon UI looks like

I encourage you to listen to my colleagues Gordon, Pat and Vishnu talk about their experiences configuring and running this benchmark via this video. While you’re watching the video, download the attached joint whitepaper, summary and detailed results, too.

Of course, EMC isn’t resting on its laurels here. In fact, we’re already working to address lessons learned from this benchmark, which will result in even greater scalability. Kudos to my Performance Engineering colleagues!

Update 5/27/2009: The benchmark FAQ is now available here.

Addressing MaxReceivedMessageSize issues

If you’re a .NET-based consumer of Enterprise Content Services (e.g. those offered via Documentum Foundation Services) and you experience a Windows Communication Foundation CommunicationException having to do with MaxReceivedMessageSize, you may be interested in the details of this post. This post applies both to direct-to-WSDL consumers and also to consumers that leverage the DFS productivity layer for .NET. Guidance herein has more to do with WCF in general; however, it will be offered in an ECS/DFS context.

Depending on the size of incoming messages from services to your application, you may discover the need to increase the maximum received message size. For example, your application experiences the following exception raised by WCF:

System.ServiceModel.CommunicationException : The maximum message size quota for incoming messages (65536) has been exceeded. To increase the quota, use the MaxReceivedMessageSize property on the appropriate binding element.

An example of exceeding quota could be application requests that result in data package-based responses with a large number of data objects and/or a set of data objects with significant metadata and/or content (e.g. ObjectService.get).

If you implement a direct-to-WSDL consumer of this service using Visual Studio and WCF’s Add Service Reference designer, by default you will introduce a per-service-binding application configuration file into the overall solution. Therefore, to declaratively increase the maximum received message size, edit app.config, increasing the value of the maxReceivedMessageSize attribute on the appropriate (named) binding element from its default as follows:

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <system.serviceModel>
    <bindings>
      <basicHttpBinding>
        <binding name="ObjectServicePortBinding" closeTimeout="00:01:00" openTimeout="00:01:00" receiveTimeout="00:10:00" sendTimeout="00:01:00" allowCookies="false" bypassProxyOnLocal="false" hostNameComparisonMode="StrongWildcard" maxBufferSize="65536" maxBufferPoolSize="524288" maxReceivedMessageSize="65536" messageEncoding="Text" textEncoding="utf-8" transferMode="Buffered" useDefaultWebProxy="true">
          <readerQuotas maxDepth="32" maxStringContentLength="8192" maxArrayLength="16384" maxBytesPerRead="4096" maxNameTableCharCount="16384" />
          <security mode="None">
            <transport clientCredentialType="None" proxyCredentialType="None" realm="" />
            <message clientCredentialType="UserName" algorithmSuite="Default" />
          </security>
        </binding>
        . . .
      </basicHttpBinding>
    </bindings>
    . . .
  </system.serviceModel>
  . . .
</configuration>

As in the case of a direct-to-WSDL consumer, a productivity layer-based consumer of the DFS Object service may also need to declaratively increase the value of MaxReceivedMessageSize to one more compatible with actual runtime requirements.

In the etc\config directory of your local DFS SDK, you should find an example app.config file. Please note that this app.config file is oriented toward productivity layer consumers, not direct-to-WSDL consumers via WCF. That being said, the same binding attributes apply to a solution here, too. The difference is how the bindings are declared in app.config.

The productivity layer-oriented declaration names a single binding, DfsDefaultService, to act as the binding for all DFS services, except for DFS runtime services, which have separate, named bindings declared. So, the Object service gets its (WCF-based) binding configuration from the “DfsDefaultService” binding…and so does, for example, the Query service.

To declaratively increase the maximum received message size in the productivity layer-oriented app.config, you will most likely edit the maxReceivedMessageSize attribute on the “DfsDefaultService” binding element, increasing it from the default value in configuration as follows:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  . . .
  <system.serviceModel>
    <bindings>
      <basicHttpBinding>
        . . .
        <binding name="DfsDefaultService" closeTimeout="00:01:00" openTimeout="00:01:00" receiveTimeout="00:10:00" sendTimeout="00:01:00" allowCookies="false" bypassProxyOnLocal="false" hostNameComparisonMode="StrongWildcard" maxBufferSize="1000000" maxBufferPoolSize="10000000" maxReceivedMessageSize="1000000" messageEncoding="Text" textEncoding="utf-8" transferMode="Buffered" useDefaultWebProxy="true">
          <readerQuotas maxDepth="32" maxStringContentLength="8192" maxArrayLength="16384" maxBytesPerRead="4096" maxNameTableCharCount="16384" />
          <security mode="None">
            <transport clientCredentialType="None" proxyCredentialType="None" realm="" />
            <message clientCredentialType="UserName" algorithmSuite="Default" />
          </security>
        </binding>
      </basicHttpBinding>
    </bindings>
  </system.serviceModel>
</configuration>

You may notice that the DFS SDK-based app.config binding element attribute values differ from the direct-from-WCF defaults (i.e. maxBufferSize–1000000 versus 65536, maxBufferPoolSize–10000000 versus 524288, and maxReceivedMessageSize–1000000 versus 65536). This change simply lessens the likelihood of encountering WCF CommunicationExceptions having to do with MaxReceivedMessageSize values.

One technique you can employ to determine what a reasonable MaxReceivedMessageSize value should be for your application is to set the value of your binding attribute/property to the absolute maximum in order to profile actual runtime message size using a web debugging proxy like Charles or Fiddler. That is, temporarily set MaxReceivedMessageSize to 2147483647 (i.e. Int32.MaxValue), pass your SOAP messages through, for example, Charles via port forwarding, review response message content length values, and reset your default runtime MaxReceivedMessageSize value accordingly.
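To try this, you might temporarily swap in a “profiling” binding like the following fragment (a sketch; only the quota attributes are shown changed, and the rest of the configuration stays as in the earlier examples). Note that in Buffered transfer mode, WCF requires maxBufferSize to equal maxReceivedMessageSize:

```xml
<basicHttpBinding>
  <!-- Temporary profiling configuration: max out the quotas (Int32.MaxValue),
       profile actual message sizes with Charles/Fiddler, then dial back down. -->
  <binding name="DfsDefaultService"
           maxReceivedMessageSize="2147483647"
           maxBufferSize="2147483647" />
</basicHttpBinding>
```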

If you prefer to take a declarative approach to WCF binding configuration for your application but you’re concerned about a user setting the value too low, you can always interrogate values at runtime in order to ensure that they’re sufficient.
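For a direct-to-WSDL consumer, one simple runtime interrogation is to read the effective binding back off the generated proxy. This is a sketch: ObjectServicePortClient and the endpoint configuration name are hypothetical stand-ins for whatever Add Service Reference generated in your solution.

```csharp
using System;
using System.ServiceModel;

// Sketch: fail fast if the configured quota is lower than the application needs.
// ObjectServicePortClient is a hypothetical generated proxy class name.
var client = new ObjectServicePortClient("ObjectServicePortBinding");
var binding = client.Endpoint.Binding as BasicHttpBinding;

const long requiredQuota = 1000000; // whatever profiling suggested you need
if (binding != null && binding.MaxReceivedMessageSize < requiredQuota)
{
    throw new InvalidOperationException(String.Format(
        "maxReceivedMessageSize ({0}) is below the required {1} bytes.",
        binding.MaxReceivedMessageSize, requiredQuota));
}
```

You could also auto-correct instead of throwing; just remember that binding properties must be changed before the channel is opened.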

For example, a productivity layer-based client could do as follows:

// Reflect on the productivity layer's private binding fields (these field
// names are internal to the DFS .NET productivity layer and may change).
System.Reflection.BindingFlags flags =
    System.Reflection.BindingFlags.Instance | System.Reflection.BindingFlags.NonPublic;
System.Reflection.FieldInfo appConfigInfo = typeof(ContextFactory).GetField("appConfig", flags);
System.Reflection.FieldInfo agentServiceBindingInfo = typeof(AppConfig).GetField("m_agentServiceBinding", flags);
System.Reflection.FieldInfo contextRegistryServiceBindingInfo = typeof(AppConfig).GetField("m_contextRegistryServiceBinding", flags);
System.Reflection.FieldInfo defaultServiceBindingInfo = typeof(AppConfig).GetField("m_defaultServiceBinding", flags);

// Build a replacement binding with maximum quotas.
// Note: MaxReceivedMessageSize is a long, but MaxBufferSize is an int.
BasicHttpBinding binding = new BasicHttpBinding();
binding.MaxReceivedMessageSize = int.MaxValue;
binding.MaxBufferSize = int.MaxValue;

// contextFactory is your ContextFactory instance.
object appConfig = appConfigInfo.GetValue(contextFactory);
agentServiceBindingInfo.SetValue(appConfig, binding);
contextRegistryServiceBindingInfo.SetValue(appConfig, binding);
defaultServiceBindingInfo.SetValue(appConfig, binding);

Of course, in a production app, I’d ensure that there is a log (auditable event) of such programmatic override activity. I might also consider presenting the user with a suggestion, requesting that the software be given the opportunity to auto-correct the value (e.g. updating the effective application configuration file).
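A minimal sketch of the auditing I have in mind (Trace and EventLog are standard .NET; the “MyDfsClient” event source is a hypothetical name you would register at install time):

```csharp
using System.Diagnostics;

// Sketch: record a programmatic quota override as an auditable event.
static void AuditQuotaOverride(long oldValue, long newValue)
{
    string message = string.Format(
        "MaxReceivedMessageSize overridden at runtime: {0} -> {1}",
        oldValue, newValue);
    Trace.TraceWarning(message);                  // developer-facing trace
    EventLog.WriteEntry("MyDfsClient", message,   // ops-facing audit trail
        EventLogEntryType.Warning);
}
```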

Building content-enabled applications

Both Pie and Marko have blogged about content-enabled applications, or what Gartner calls CEVAs (content-enabled vertical applications).

As it so happens, there will be a session on this subject next month at EMC World 2009 (one I helped put together, though others will present it).

Based on my research of what folks label a content-enabled application, two things rise to the top: process (surrounding content) and subject matter expertise (individual or group surrounding process), and context. OK, three things.

For example, Forrester defines content-centric applications as “solutions that put the business’ content to use, and add context along the way–to support line-of-business needs.” Example solutions include customer self-service, claims processing, proposal management, contract management, and case management.

Some CEVA vendors argue that content-enabled applications are process-oriented, not content-centric. I tend to prefer this viewpoint. A claim is valueless in itself. Only once a claim is processed is value realized, including taking a future liability off the books.

Content-enabled applications should facilitate the convergence of content, collaboration, interaction, and process.

Before you leverage your content in an application to generate value, ask yourself a few questions:

  • Who uses the content? Why? How?
  • What processes does the content support?
  • If I’m not a subject matter expert for this type of content, who can I involve to design a better application experience?
  • What context is involved, either centrally or peripherally?

Start with something familiar to just about anyone these days: email (or IM, micro-blogging, etc.). Answer the questions. See how applications, for example, around email have evolved. Think about where current email applications may have untapped potential. Etc.

So, where have all the CEVAs gone (as Marko asks)?

  • I think that we in the content management business do ourselves a disservice by overly complicating concepts (e.g. behind TLAs or FLAs). Although fine as a conceptual catalyst, CEVA is self-defeating, IMHO, as a rallying label.
  • I agree that CMIS has great potential to increase the availability of content-enabled applications, if for no other reason, because application development that consumes the proposed standard should have a greater return on investment by being applicable to multiple content repositories. (ECM vendor partners are you listening?)
  • In the end, it’s the application, not the content or the process or the people. That is, if you’re just adding a document and perhaps a workflow to some code, you may have an app…but it won’t be used. Focus on user experience (i.e. the meaningful, intuitive presentation of content, context and process together).

Back to EMC World… (Orlando, FL, May 17-21)

I’ll miss interacting with ‘Zilla at the conference. It was at EMC World in 2007 (also held in Orlando, FL) that I first met Mark in person.

If you are able to make the conference and consider yourself to be a “2.0 type,” you may be interested in Len’s advert. Looks like there is even a LinkedIn event established for the conference.

I plan to tweet the conference and otherwise engage with the community. In the meantime, if you plan to attend my session (as presented by others), please feel free to comment (here or ECN) on your thoughts about content-enabled applications and what you’d like discussed or demoed. Thanks in advance.