Monthly Archives: February 2008

Sunday Funny: “NOT FOR EXPORT”

Back around 1998, the company I worked for at the time received some funding to create a new e-commerce product.  We had the full gamut of business requirements to meet.  It had to be fast, easy for end users, flashy, multi-language, etc.  Sad to say, I probably haven’t had as ambitious a set of work to accomplish since those heady days.

This effort pre-dated Microsoft.NET.  Plain vanilla ASP was still somewhat new (or at least very unfamiliar to my company).  "Brick and mortar" companies were doomed.  Doomed!  This is to say that it was pioneering work.  Not Hadron Collider pioneering work, but for us in our little world, it was pioneering work.

We were crazy busy.  We were doing mini POC’s almost every day, figuring out how to maintain state in an inherently stateless medium, figuring out multi-language issues, row-level security.  We even had to create a vocabulary to define basic terms (I preferred state-persistent but for some reason, the awkward "statefull" won the day).

As we were madly inventing this product, the marketing and sales people were out there trying to sell it.  Somehow, they managed to sell it to our nightmare scenario.  Even though we were designing and implementing an enterprise solution, we really didn’t expect the first customer to use every last feature we built into the product on day zero.  This customer needed multi-language support and a radically different user interface from the "standard" system, but with the same business logic.  Multi-language was especially hard in this case because we had always focused on Spanish or French, but this time it was Chinese (which is a double-byte character set and required special handling given the technology we used).

Fast forward a few months and I’m on a Northwest Airlines flight to Beijing.  I had been so busy preparing for this trip that I had almost no idea what it would be like to go there.  I had once read a book about an American who had lived in China for several years and learned the language.  One day he was walking the city and asked some people for directions.  The conversation went something like this:

  • American: "Could you tell me how to get to [XX] street?"
  • Chinese: "Sorry, we don’t speak English".
  • American: "Oh, well I speak Mandarin." and he asked them again in Chinese, but more clearly (as best he could).
  • Chinese: Very politely, "Sorry, we don’t speak English".

The conversation went on like that for a bit and the American gave up in frustration.  As he was leaving them, he overheard one man say to the other, "I could have sworn he was asking for directions to [XX] street."

I had picked up a few bits and pieces of other China-related quasi-information and "helpful advice":

  • A Korean co-worker told me that I needed to be careful of the Chinese because "they would try to get you drunk and take advantage of you" in the sense of pressuring me into bad business decisions.
  • We were not allowed to drive cars (there was some confusion as to whether this was a custom, a legal requirement or just the client’s rule).
  • There were special rules for going through customs.
  • We were not allowed to use American money for anything.
  • You’re not supposed to leave tips.  It’s insulting if you do.

And finally, I had relatively fresh memories of the Tiananmen massacre.  When I was in college, I remember seeing real-time Usenet postings as the world looked on in horror.

In short, I was very nervous.  I wasn’t just normal-nervous in the sense that I was delivering a solution that was orders of magnitude more complicated than anything I had ever done before.  I was also worried about accidentally breaking a rule that could get me in trouble.

I’m on this 14-hour flight and though it was business class, 14 hours is a damned long time.  There are only so many ways to entertain yourself by reading, watching movies or playing with the magnetized cutlery.  Even a really good book is hard to read for several hours straight.

Eventually, I started to read the packaging material on a piece of software I was hand-carrying with me to the client, Netscape’s web server.  I’m reading the hardware/software requirements, the marketing blurbs, looking at the pretty picture and suddenly, I zero in on the giant "NOT FOR EXPORT" warning, something about 128 bit encryption.  I stuffed the box back into my carry bag, warning face-down (as if that would have helped) and tried to keep visions of Midnight Express out of my head. 

Looking back on it now, I should have been worried, if at all, when I left the U.S., not when I was entering China 🙂  Nothing untoward happened and I still consider that to be the best and most memorable business trip I’ve had the pleasure of making.

</end>

Subscribe to my blog!

Technorati Tags: ,

Solution: SPQuery Does Not Search Folders

This past week I was implementing an "evolving" solution for a client that uses BDC and SPQuery and ran into some difficulty using SPQuery against a document library containing folders.  Bottom line: set the query’s ViewAttributes to Scope="Recursive".

My scenario:

  • On Monday, I upload a document and supply some meta data.
  • The following week, I upload a new document.  Much of this new document’s meta data is based on the document I uploaded on Monday (which we call the "master document").
  • We’ve created a web service façade that provides a BDC-friendly interface to the list so that users can easily locate that Monday document via a title search.
  • A BDC data column provides a friendly user interface.  (This is part of my attempt at using BDC for a more friendly Lookup column).

The final BDC façade service uses a query like this to do the lookup:

 // Used U2U tool to assist in generating this CAML query.
      oQuery.Query =
        "<Where>";

      if (titleFilter.Length > 0)
        oQuery.Query +=
          "  <And>";

      oQuery.Query +=
        "    <And>" +
        "      <Geq>" +
        "        <FieldRef Name=\"DocumentId\" />" +
        "        <Value Type=\"Text\">" + minId + "</Value>" +
        "      </Geq>" +
        "      <Leq>" +
        "        <FieldRef Name=\"DocumentId\" />" +
        "        <Value Type=\"Text\">" + maxId + "</Value>" +
        "      </Leq>" +
        "    </And>";

      if (titleFilter.Length > 0)
        oQuery.Query +=
          "    <Contains>" +
          "      <FieldRef Name=\"Title\" />" +
          "      <Value Type=\"Text\">" + titleFilter + "</Value>" +
          "    </Contains>" +
          "  </And>";
      oQuery.Query +=
        "</Where>";

During the initial stage of development, this worked great.  However, we introduced folders into the library to solve some problems and suddenly my BDC picker wouldn’t return any results.  I tracked this down to the fact that the SPQuery would never return any results.  We used folders primarily to allow multiple files with the same name to be uploaded, but with different meta data.  When a file is uploaded, we create a folder based on the list item’s ID and then move the file there (I wrote about that here; we’ve had mixed results with this approach but on the whole, it’s working well).  The users don’t care about folders and, in fact, don’t really understand that there are any.  We have configured all the views on the library to show items without regard to folders.
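As an aside, the folder-per-item arrangement can be sketched like this.  The helper name and URL shapes are mine, invented for illustration, and the object model calls in the comment are only roughly what the real event receiver does:

```csharp
using System;

static class FolderPerItem
{
    // Each uploaded document gets its own folder, named after the list item's
    // ID.  That is what lets two files with the same name carry different
    // meta data.  This helper only computes the folder URL; it is pure string
    // logic and needs no SharePoint farm.
    public static string GetItemFolderUrl(string libraryUrl, int itemId)
    {
        return libraryUrl.TrimEnd('/') + "/" + itemId;
    }
}

// Inside the real event receiver, the move looks roughly like this
// (not compiled here; requires Microsoft.SharePoint):
//   SPFolder folder = library.RootFolder.SubFolders.Add(itemId.ToString());
//   item.File.MoveTo(folder.Url + "/" + item.File.Name);
```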

I hit this problem twice as the technical implementation evolved and solved it differently each time.  The first time, I wasn’t using the CONTAINS operator in the query.  Without a CONTAINS operator, I was able to solve the problem by specifying the view in the SPQuery’s constructor.  Instead of using the default constructor:

SPList oList = web.Lists["Documents"];

SPQuery oQuery = new SPQuery();

I instead used a constructor that specified a view:

SPList oList = web.Lists["Documents"];

SPQuery oQuery = new SPQuery(oList.Views["All Documents"]);

That solved the problem and I started to get my results.

I then added the CONTAINS operator into the mix and it broke again.  It turns out that the CONTAINS operator, so far as I can tell, does not work with the view the same way as the simpler GEQ / LEQ operators.  I did some searching and learned that the query’s ViewAttributes should be set to "Recursive", as in:

oQuery.ViewAttributes = "Scope=\"Recursive\"";

That solved the problem for CONTAINS.  In fact, this also solved my original search problem and if I had specified the recursive attribute the first time, I would not have run into the issue again.
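Pulling both fixes together, a minimal sketch of the working pattern looks like this.  The CAML assembly is pure string work and runs anywhere; the SPQuery lines in the trailing comment need the SharePoint object model, and names like the "All Documents" view come from my scenario, so adjust for yours:

```csharp
using System;

static class CamlHelper
{
    // Builds the <Where> clause: a DocumentId range, optionally AND-ed with a
    // Contains filter on Title.  Mirrors the query shown earlier in the post,
    // minus the incremental string concatenation.
    public static string BuildWhere(string minId, string maxId, string titleFilter)
    {
        string range =
            "<And>" +
              "<Geq><FieldRef Name=\"DocumentId\" /><Value Type=\"Text\">" + minId + "</Value></Geq>" +
              "<Leq><FieldRef Name=\"DocumentId\" /><Value Type=\"Text\">" + maxId + "</Value></Leq>" +
            "</And>";

        if (String.IsNullOrEmpty(titleFilter))
            return "<Where>" + range + "</Where>";

        return
            "<Where><And>" + range +
              "<Contains><FieldRef Name=\"Title\" /><Value Type=\"Text\">" + titleFilter + "</Value></Contains>" +
            "</And></Where>";
    }

    // The critical part: without this, SPQuery confines itself to the root folder.
    public const string RecursiveScope = "Scope=\"Recursive\"";
}

// Against a real farm (not compiled here; requires Microsoft.SharePoint):
//   SPQuery oQuery = new SPQuery(oList.Views["All Documents"]);
//   oQuery.Query = CamlHelper.BuildWhere(minId, maxId, titleFilter);
//   oQuery.ViewAttributes = CamlHelper.RecursiveScope;
//   SPListItemCollection results = oList.GetItems(oQuery);
```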

The fact that a view-based SPQuery works for some operators (GEQ/LEQ) and not others (CONTAINS), coupled with the fact that KPIs don’t seem to work at all with folder-containing document libraries leads me to believe that SPQuery has some orthogonality issues.

Special Thanks:

  • The good folks at U2U and their query tool.
  • Michael Hoffer’s great "learning by doing" blog post, comments and responses.

</end>


MOSS KPI bug? List Indicator Tied to Document Library With Folders

 

UPDATE 02/29/08: I solved this problem by creating a folder and then assigning a content type to the folder which has the meta data I need for the KPIs.  I described that in a little more detail here.

We have implemented a technical solution where users upload documents to a document library.  An event receiver creates a directory and moves the file to that directory (using a technique similar to what I wrote about here).  We’ve successfully navigated around the potential issues caused by event receivers that rename uploaded files (mainly because users never start their document by clicking on "New" but instead create the docs locally and then upload them).

The meta data for these documents includes a Yes/No site column called "Urgent" and another site column called "Status".  We need to meet a business requirement that shows the percentage of "Urgent" documents whose status is "Pending".

This is usually simple to do and I described something very much like this at the SharePoint Beagle with lots of screen shots if you’re interested.

In a nutshell, I did the following:

  • Create a view on the doc library called "Pending".
  • Configure the view to ignore folder structure.
  • Create a KPI List.
  • Create an indicator in the list that points to the doc lib and that "Pending" view.

This simply does not work.  The KPI shows my target (e.g. five urgent documents) but always shows the actual number of urgent documents as zero.  Paradoxically, if you drill down to the details, it shows the five urgent documents in the list.  I created a very simple scenario with two documents, one in a folder and one not.  Here is the screen shot:

[Screenshot: a KPI indicator whose drill-down details list two documents, but whose "value" column shows only one.]

The above screen shot clearly shows there are two documents in the view, but the "value" is one.  The "CamlSchema" document with the blank Document Id is in the root folder and the other is in a folder named "84".

It appears to me that even though you specify a view, the KPI doesn’t honor the "show all items without folders" setting and instead, confines itself to the root folder.

If I’m wrong, please drop me a line or leave a comment.

</end>


SPD Workflow “Collect Data From A User”: Modify the Generated Task Form

I’m working on a project that uses five different SharePoint Designer workflows to handle some document approvals.  SPD provides the "collect data from a user" action so that we can prompt the user for different bits of information, such as whether they approve it, some comments and maybe ask what they had for dinner the other night.

The forms are perfectly functional.  They are tied to a task list as a content type.  They are 100% system-generated.  This is their strength and weakness.  If we can live with the default form, then we’re good to go.  However, we don’t have too much control over how SPD creates the form.  If we don’t like that default behavior, we need to resort to various tricks to get around it (for example, setting priority on a task). 

I needed to provide a link on these task forms that opened the view properties (dispform.aspx) of the "related item" in a new window.  This provides one-click access to the meta data of the related item.  This is what I mean:

[Screenshot: a generated task form with an added link that opens the related item’s dispform.aspx in a new window.]

Thankfully, we can do that and it’s not very hard.  Broadly speaking, fire up SPD, navigate to the directory that houses the workflow files and open the ASPX file you want to modify.  These are just classic XSL transform instructions and if you’ve mucked about with itemstyle.xsl, search or other XSL scenarios, this will be easy for you.  In fact, I found it generally easier, since the generated form is somewhat easier to follow than a search core results web part (or the nightmarish CQWP).

Of course, there is one major pitfall.  SPD’s workflow editor expects full control over that file.  If you modify it, SPD will happily overwrite your changes given the right set of circumstances.  I did two quick tests to see how bad this could get.  They both presuppose that you’ve crafted a valid SPD workflow that uses the "collect data from a user" step.

Test 1:

  • Modify the ASPX file by hand.
  • Test it (verify that your changes were properly saved and didn’t break anything).
  • Open up the workflow and add an unrelated action (such as "log to history").
  • Save the workflow.

Result: In this case, SPD did not re-create the form.

Test 2:

  • Do the same as #1 except directly modify the "collect data from a user" action.

Result: This re-creates the form from scratch, over-writing your changes.

Final Notes:

  • At least two SPD actions create forms like this: "Collect Data From a User" and "Assign To Do Item".  Both of these actions’ forms can be manually modified.
  • I was able to generate my link to dispform.aspx because, in this case, the related item always has its ID embedded in its URL.  I was able to extract it and then build an <a href> based on it to provide the one-click meta data access feature.  It’s unlikely that your URL follows this rule.  There may be other ways to get the ID of the related item, but I have not had to cross that bridge, so I don’t know if it gets to the other side of the chasm.
  • I didn’t investigate, but I would not be surprised if there is some kind of template file in the 12 hive that I could modify to affect how SPD generates the default forms (much like we can modify alert templates).

</end>


Are “Unknown Error” Messages Really Better Than a Stack Trace?

I was reading Madhur’s blog post on how to enable stack trace displays and now I’m wondering: why don’t we always show a stack trace?

Who came up with that rule and why do we follow it?

End users will know something is wrong in either case.  At least with a stack trace, they can press control-printscreen, copy/paste into an email and send it to IT.  That would clearly reduce the time and effort required to solve the issue.

</end>


Sunday (Embarrassing) Funny: “My Name is Paul Galvin”

A bunch of years ago, my boss asked me to train some users on a product called Results.  Results is an end user reporting tool.  It’s roughly analogous to SQL Server Reporting Services or Crystal.  At the time, it was designed to run on green tubes (e.g. a Wyse 50 terminal) connected to a Unix box via telnet.

My default answer to any question that starts with "Can you … " is "Yes" and that’s where all the trouble started.

The client was a chemical company out in southern California and had just about wrapped up a major ERP implementation based on QAD’s MFG/PRO.  The implementation plan now called for training power end users on the Results product.

I wasn’t a big user of this tool and had certainly never trained anyone before.  However, I had conducted a number of other training classes and was quick on my feet, so I was not too worried.  Dennis, the real full-time Results instructor, had given me his training material.  Looking back on it now, it’s really quite absurd.  I didn’t know the product well, had never been formally trained on it and had certainly never taught it.  What business did I have training anyone on it? 

To complicate things logistically, I was asked to go and meet someone in Chicago as part of a pre-sales engagement along the way.  The plan was to fly out of New Jersey, go to Chicago, meet for an hour with the prospect and then continue on to California.

Well, I got to Chicago and the sales guy on my team had made some mistake and never confirmed the meeting.  So, I showed up and the prospect wasn’t there.  Awesome.  I pack up and leave and continue on to CA.  Somewhere during this process, I find out that the client is learning less than 24 hours before my arrival that "Paul Galvin" is teaching the class, not Dennis.  The client loves Dennis.  They want to know "who is this Paul Galvin person?"  "Why should we trust him?"  "Why should we pay for him?"  Dennis obviously didn’t subscribe to my "give bad news early" philosophy.  Awesome.

I arrive at the airport and for some incredibly stupid reason, I had checked my luggage.  I made it to LAX but my luggage did not.  For me, losing luggage is a lot like going through the seven stages of grief.  Eventually I make it to the hotel, with no luggage, tired, hungry and wearing my (by now, very crumpled) business suit.  It takes a long time to travel from Newark — to O’Hare — to a client — back to O’Hare — and finally to LAX.

I finally find myself sitting in the hotel room, munching on a snickers bar, exhausted and trying to drum up the energy to scan through the training material again so that I won’t look like a complete ass in front of the class.   This was a bit of a low point for me at the time.

I woke up the next day, did my best to smooth out my suit so that I didn’t look like Willy Loman on a bad day and headed on over to the client.  As is so often the case, in person she was nice, polite and very pleasant.  This stood in stark contrast to her extremely angry emails/voicemails from the previous day.  She led me about three miles through building after building to a sectioned-off area in a giant chemical warehouse where we would conduct the class for the next three days.  The 15 or 20 students slowly assembled, most of them still expecting Dennis.

I always start off my training classes by introducing myself, giving some background and writing my contact information on the white board.  As I’m saying, "Good morning, my name is Paul Galvin", I write my name, email and phone number up on the white board in big letters so that everyone can see it clearly.  I address the fact that I’m replacing Dennis and I assure them that I am a suitable replacement, etc. I have everyone briefly tell me their name and what they want to achieve out of the class so that I can tailor things to their specific requirements as I go along.  The usual stuff.

We wrap that up and fire up the projector.  I go to erase my contact info and … I had written it in permanent marker.   I was so embarrassed.  In my mind’s eye, it looked like this: There is this "Paul Galvin" person, last minute replacement for our beloved Dennis.  He’s wearing a crumpled up business suit and unshaven.  He has just written his name huge letters on our white board in permanent marker.  What a sight! 

It all ended happily, however.  This was a chemical company, after all.  A grizzled veteran employee pulled something off the shelf and, probably in violation of EPA regulations, cleared the board.  I managed to stay 1/2 day ahead of the class throughout the course and they gave me a good review in the end.  This cemented my "pinch hitter" reputation at my company.  My luggage arrived the first day, so I was much more presentable days two and three.

As I was taking the red eye back home, I was contemplating "lessons learned".  There was plenty to contemplate.  Communication is key.  Tell clients about changes in plan.  Don’t ever check your luggage at the airport if you can possibly avoid it.  Bring spare "stuff" in case you do check your luggage and it doesn’t make it.  I think the most important lesson I learned, however, was this: always test a marker in the lower left-hand corner of a white board before writing, in huge letters, "Paul Galvin".

</end>


Perspectives: SharePoint vs. the Large Hadron Collider

Due to some oddball United Airlines flights I took in the mid 90’s, I somehow ended up with an offer to transform "unused miles" into about a dozen free magazine subscriptions.  That is how I ended up subscribing to Scientific American magazine.

As software / consulting people, we encounter many difficult business requirements in our careers.  Most of the time, we love meeting those requirements and in fact, it’s probably why we think this career is the best in the world.  I occasionally wonder just what in the world I would have done with myself if I had been born at any other time in history.  How terrible would it be to miss out on the kinds of work I get to do now, at this time and place in world history?  I think: pretty terrible.

Over the years, some of the requirements I’ve faced have been extremely challenging to meet.  Complex SharePoint stuff, building web processing frameworks based on non-web-friendly technology, complex BizTalk orchestrations and the like.  We can all (hopefully) look proudly back on our career and say, "yeah, that was a hard one to solve, but in the end I pwned that sumbitch!"  Better yet, even more interesting and fun challenges await.

I personally think that my resume, in this respect, is pretty deep and I’m pretty proud of it (though I know my wife will never understand 1/20th of it).  But this week, I was reading an article about the Large Hadron Collider in my Scientific American magazine and had one of those rare humbling moments where I realized that despite my "giant" status in certain circles or how deep I think my well of experience, there are real giants in completely different worlds. 

The people on the LHC team have some really thorny issues to manage.  Consider the Moon.  I don’t really think much about the Moon (though I’ve been very suspicious about it since I learned it’s slowing the Earth’s rotation, which can’t be a good thing for us Humans in the long term).  But, the LHC team does have to worry.  LHC’s measuring devices are so sensitive that they are affected by the Moon’s (Earth-rotation-slowing-and-eventually-killing-all-life) gravity.  That’s a heck of a requirement to meet — produce correct measurements despite the Moon’s interference.

I was pondering that issue when I read this sentence: "The first level will receive and analyze data from only a subset of all the detector’s components, from which it can pick out promising events based on isolated factors such as whether an energetic muon was spotted flying out at a large angle from the beam axis."  Really … ?  I don’t play in that kind of sandbox and never will.

Next time I’m out with some friends, I’m going to raise a toast to the good people working on the LHC, hope they don’t successfully weigh the Higgs boson particle and curse the Moon.  I suggest you do the same.  It will be quite the toast 🙂

</end>


Quick Impression: System Center Capacity Planner for SharePoint

I just fired up the capacity planning tool that’s all the rage these days.

I found it easy to use and quickly modeled a client environment I worked on this past summer.

With some trepidation, I pressed the final OK button and it recommended something that is pretty similar to what we gave our client (we actually threw in a second application server for future Excel use).  I take that to be a good sign, and it increases my confidence in the tool.

It seems like pretty powerful stuff, and a much better starting point than a blank page.

I like that it lets you get into some good detail about the environment: how many users, how you project they will use the system (publishing, collaboration, etc.), branch offices and the connectivity / network capacity between them and the mama server.  Good stuff.

It asks broad based questions and then lets you tweak the details for a pretty granular model of your environment.

I hesitated to download it because I have so many other things to look at, read and try to digest.  I’m glad I did.

It’s an easy two-step process.  Download system center capacity planner and then download the SharePoint models.  It runs nicely on Windows XP.

Based on my quick impression, I don’t see how it might account for:

  • Search: Total documents, maybe types of documents, languages.
  • Excel server: how much, if at all?
  • Forms server: how much, if at all?
  • BDC: how much, if at all?

Those may be modeled and I just didn’t see them in the 10 minute review.

I will definitely use it at my next client.

If I were not a consultant and instead working for a real company :), I’d model my current environment and see how the tool’s recommended model matches up against reality.  That would be pretty neat.  It could lead to some good infrastructure discussion.

</end>


Solution: System.IO.FileNotFoundException on “SPSite = new SPSite(url)”

 

UPDATE: I posted this question to MSDN here (http://forums.microsoft.com/Forums/ShowPost.aspx?PostID=2808543&SiteID=1&mode=1) and Michael Washam of Microsoft responded with a concise answer. 

I created a web service to act as a BDC-friendly facade to a SharePoint list.  When I used this from my development environment, it worked fine. When I migrated this to a new server, I encountered this error:

System.IO.FileNotFoundException: The Web application at http://localhost/sandbox could not be found. Verify that you have typed the URL correctly. If the URL should be serving existing content, the system administrator may need to add a new request URL mapping to the intended application. at Microsoft.SharePoint.SPSite..ctor(SPFarm farm, Uri requestUri, Boolean contextSite, SPUserToken userToken) at Microsoft.SharePoint.SPSite..ctor(String requestUrl) at Conchango.xyzzy.GetExistingDocument(String minId, String maxId, String titleFilter) in C:\Documents and Settings\Paul\My Documents\Visual Studio 2005\Projects\xyzzy\BDC_DocReview\BDC_DocReview\DocReviewFacade.asmx.cs:line 69

Here is line 69:

using (SPSite site = new SPSite("http://localhost/sandbox"))

I tried different variations on the URL, including using the server’s real name, its IP address, trailing slashes on the URL, etc.  I always got that error. 

I used The Google to research it.  Lots of people face this issue, or variations of it, but no one seemed to have it solved.

Tricksy MOSS provided such a detailed error that it didn’t occur to me to check the 12 hive logs.  Eventually, about 24 hours after my colleague recommended I do so, I checked out the 12 hive log and found this:

An exception occured while trying to acquire the local farm:
System.Security.SecurityException: Requested registry access is not allowed.
at System.ThrowHelper.ThrowSecurityException(ExceptionResource resource) at
Microsoft.Win32.RegistryKey.OpenSubKey(String name, Boolean writable) at
Microsoft.Win32.RegistryKey.OpenSubKey(String name) at
Microsoft.SharePoint.Administration.SPConfigurationDatabase.get_RegistryConnectionString() at
Microsoft.SharePoint.Administration.SPConfigurationDatabase.get_Local() at
Microsoft.SharePoint.Administration.SPFarm.FindLocal(SPFarm& farm, Boolean& isJoined)
The Zone of the assembly that failed was:  MyComputer

This opened up new avenues of research, so it was back to The Google. That led me to this forum post: http://forums.codecharge.com/posts.php?post_id=67135.  That didn’t really help me but it did start making me think there was a database and/or security issue.  I soldiered on and Andrew Connell’s post finally triggered the thought that I should make sure that the application pool’s identity account had appropriate access to the database.  I thought it already did.  However, my colleague went and gave the app pool identity account full access to SQL.

As soon as she made that change, everything started working. 

What happened next is best expressed as a haiku poem:

Problems raise their hands.
You swing and miss.  Try again.
Success!  But how?  Why?

She didn’t want to leave things alone like that, preferring to give the minimum required permission (and probably with an eye to writing a blog entry; I beat her to the punch, muhahahahaha!).

She removed successive permissions from the app pool identity account until … there was no longer any explicit permission for the app pool identity account at all.  The web service continued to work just fine.

We went and rebooted the servers.  Everything continued to work fine.

So, to recap: we gave the app pool identity full access and then took it away.  The web service started working and never stopped working.  Bizarre.

If anyone knows why that should have worked, please leave a comment. 

</end>
