Collaboration in the Enterprise from the perspective of Anthony Holmes, an IBM Accelerated Value Program Leader (Premium Support Program).

Pavlov’s Emails

Anthony Holmes  23 January 2012 13:07:02
Image:Pavlov’s Emails
Over the last two years I've made big changes to the way that I work as I've adopted IBM Connections. But I recently discovered that I've still got a way to go.

Email is classically addictive. The "you've got mail" sound, popup, flashing light, little email envelope, whatever, holds out the possibility that somebody has just sent you something very important/exciting/amusing. You salivate at the prospect, interrupt what you are doing and open the email. I've been weaning myself off hovering over my inboxes, but I haven't succeeded.

I jog around The Tan in Melbourne in the late evening 2-4 times each week. On the basis that you can't improve if you aren't measuring, I take it fairly seriously: I have a Polar RCX5 watch with heart monitor and GPS to let me measure my times and improve them. So you'd think I'd be pretty focussed while running.

The other night, at about 11pm on a Friday, I took my phone with me because I was on call and needed to respond if I got a text. I didn't get any texts. Instead, halfway through the run I saw that I had a new email.

The urge to read the email - perhaps while I was still running - was overwhelming.

I resisted. But as I sat down at the end of my run (and read the terribly unimportant email), I reflected on the fact that I am still hooked on email.

So:
  • If something is urgent, people can phone me or walk up to me.
  • They can try pinging me on Sametime, but if I don't respond, see above.
  • When I'm working, I will define when I read email, rather than letting email tell me.
  • As a general rule I will look at email when I want to take a clear break from another activity - at most once every 1.5 to two hours.
  • Never, during an unplanned break, will I check my email. I will always wait until my previously nominated 'email checking' time.
  • When I get an email that I'm not actioning immediately, but which I need to respond to, I lift it out of the inbox and into either an eProductivity GTD action or an IBM Connections Activity. The task can then be queued and acted upon within a structured set of priorities, independent of the random, incoherent stream of the inbox.

Using Social Software like IBM Connections doesn't require that email be completely abandoned: it still has a useful role. But I think that it's very important to ensure that email works for you instead of the other way around.

How Network Latency affects Notes

Anthony Holmes  26 July 2011 22:36:57
For many years I've been aware that the Notes client can sometimes be more affected by network latency than bandwidth restrictions. I decided to measure the effect.

Here's what I did.

I obtained a copy of WANem, an open source Wide Area Network Emulator. For reasons that I don't quite understand, I wasn't able to get a laptop to boot WANem off a CD or USB stick, so instead I downloaded the VM image. This proved easy to use and (unlike many WAN emulators) was documented in a way that was reasonably simple to understand.

I had a local area network with the following components:
  • A server on IP address 10.1.1.3
  • A client on IP address 10.1.1.4
  • A WANem VM running on 10.1.1.5 (I simply started WANem and let it obtain an IP address using DHCP on my local network.)

Before WANem was configured, ping times between the client and server were less than a millisecond.

Once WANem was running, I had to direct the client and the server to route via WANem. This was a simple case of entering the following commands from a Command Prompt:

Server:
route add 10.1.1.4 mask 255.255.255.255 10.1.1.5

Client:
route add 10.1.1.3 mask 255.255.255.255 10.1.1.5
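
These routes are not persistent (they disappear after a reboot), and they can be removed explicitly once testing is finished:

Server:
route delete 10.1.1.4

Client:
route delete 10.1.1.3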

I could then change latency by logging in to the WANem configuration screen using http://10.1.1.5/WANem.

On that screen I could enter a delay in milliseconds - for example, 100.

On either the Client or the Server, I could test that this was working by:
  • Performing a Tracert, which showed that it was now hopping via 10.1.1.5, or
  • Performing a ping, and noting the ping time in milliseconds.
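
For example, from the client (these are the standard Windows commands, and 10.1.1.3 and 10.1.1.5 are the addresses from my little lab network):

tracert -d 10.1.1.3
ping -n 4 10.1.1.3

The tracert should show 10.1.1.5 as the first hop, and the ping replies should come back at roughly twice the delay configured in WANem.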

When I chose 100 milliseconds in WANem, the total trip took roughly twice that: just over 100ms to get to WANem and then another 100ms on to the destination.

Now that my method for setting latency was up and working, I could measure how different degrees of latency affected Notes. Many Notes actions can be affected by latency: opening a database, scrolling through a view, opening an email, etc. Timings can also be affected by the presence (or absence) of indexes. To provide a simple, repeatable activity that could be measured without being skewed by other factors, I chose an action that wouldn't be affected by indexing or caching on the server: I composed new emails with a 1MB attachment and timed how long it took for cursor control to return after I pressed Send.

Here are the results:

Image:Response times rising in a linear fashion as latency increases

The chart shows that network latency has a linear effect: 200 milliseconds produces twice the delay that 100 milliseconds does. The timings reinforce my gut feeling that any level of latency below 150 milliseconds is likely to be acceptable, but users probably find it increasingly tiresome the further latency rises above that level. Network throughput is improving and latency is dropping as time goes on, so keeping latency below those levels should be relatively straightforward within a country or a region. The speed of light means it becomes more difficult as you cross the globe: latency between Australia and the US is often around 200 milliseconds, so working with US servers from Australia is feasible, but local replicas would be better if you had to do it too often.
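
To put rough numbers on that linearity (the round-trip count here is purely illustrative - it isn't something I measured): if an operation needs, say, 20 client/server round trips, then a 100ms round trip costs about 20 x 0.1s = 2 seconds of pure network waiting, and 200ms costs about 4 seconds. Double the latency, double the wait, regardless of how much bandwidth is available.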

I'll think about whether any other tests would be useful. For example, forms in custom applications with lots of lookups can slow with latency. But I think this test gives a good start.

Of course, network latency can be hidden from users by using local replicas or (much better) the great new feature that hides replication from users: Managed Replicas.

Who knew they tried so hard: Prometric’s efforts to keep exams fair

Anthony Holmes  7 June 2011 18:56:44
The Economist has an article about the steps used to detect cheating in exams, with a fair bit of coverage of techniques used by Prometric, which manages Lotus Certification exams amongst many others.

The full article is in The Economist Technology Quarterly in the 4th June 2011 edition. A slightly shorter version of the article is online here:

http://m.economist.com/babbage-tech-21016445.php

It's astonishing to hear that they shut down about five test centres around the world each week.

There are a number of interesting comments:
  • If more than two people come up with the same pattern of right/wrong answers, it's a "racing certainty" that there's cheating involved.
  • Moving from one country to another to do a test can raise flags of suspicion.
  • Suddenly improving can raise suspicion (Where's the reward for doing lots of hard study?!)  :-)
  • Even the pattern of changing one answer to another while the exam is underway can be examined (I guess they're thinking about a number of people taking a test in a single room at the same time and sharing their answers?)
  • Changing a series of wrong answers to correct answers as you review the test is also suspicious (which probably means that I'm safe. I reckon I muck up a lot of previously correct answers when I review my exams. The best result I ever got was during a test I took very early one morning with a crashing hangover. I didn't review a single question, yet walked out with a score of 97%).

I wonder whether all of this effort also explains a question that had disturbed me on one exam. I was convinced that there was no correct answer to the question. I'm not normally good at exactly remembering a question and its answers, but I made a huge effort to remember this one, and went and researched it afterwards. I was never satisfied that any one of the four answers was correct for that question. Maybe it was a question that was designed to detect people working off cheat sheets?

TNEF Conversion: SMTP Server on the edge only

Anthony Holmes  31 May 2011 19:31:12
I finally proved to myself that TNEF conversion only needs to be done at the Domino SMTP server that first receives a message in TNEF format.

Almost four years ago (wow, that's a long time) I wrote about how Domino has a setting that allows it to decode messages sent using Microsoft's proprietary Transport Neutral Encapsulation Format, which carries Outlook Rich Text. This was a format that even Outlook Express couldn't handle. Using the settings meant that Notes users didn't get winmail.dat or att0000... messages. This technote describes how you use the Notes.ini variable TNEFEnableConversion=1 (and possibly some other settings) to enable this feature.
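
For reference, the core setting is a single line in the server's notes.ini (the technote covers any additional settings):

TNEFEnableConversion=1

It can also be set from a running server's console with "set configuration TNEFEnableConversion=1", which writes the same line into notes.ini.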

What has never been 100% clear to me is which server(s) need to have TNEF conversion enabled. Is it only the Gateway Domino server that first receives the message, or is it the server that deposits the email into a mail file? Logically, the SMTP server could pass on the message without examining the content, with the choice only needing to be made at the point of delivery. I could find comments that sort of implied the opposite, without being quite clear enough for my suspicious mind.

So today I proved it to myself. I had a bit of trouble generating a TNEF message (it looks like these days many Exchange sites override the user's choice and prevent TNEF messages being sent). To do my test I dug up a winmail.dat email I received back in 2001 and did the following:
  • I forwarded an email containing a winmail.dat attachment from one Domino system across the internet to another Domino SMTP server. The winmail.dat attachment arrived intact.
  • I turned on TNEFEnableConversion=1 on the receiving server. The winmail.dat attachment was removed. (Proving that forwarding an email with a winmail.dat attachment was a valid way of testing TNEF conversion).
  • Finally, I moved the receiving mail file from the gateway SMTP server to another Domino server in the same Domain, one which didn't have TNEFEnableConversion=1. The winmail.dat attachment was still removed, proving that the conversion was happening on the Gateway server.

Image:TNEF Conversion: SMTP Server on the edge only
Old Pillar Post Box, near Carlton Gardens, Melbourne, Victoria, Australia.

IBM Collaboration Software Linux Compatibility Matrix

Anthony Holmes  14 May 2011 00:47:33
I am setting up some standard VMs with software installed, so I decided I needed a single table showing which Linux versions each of the main IBM Collaboration Software products works with. This made it easier to decide which flavour of Linux I should choose.

So here's a table showing which versions of Linux can be used with Notes 8.5.2, Domino 8.5.2, Sametime 8.5.1 and Connections 2.5. There are links to the source documents (the System Requirements documents) for each product at the bottom of the table. My apologies if I've made any mistakes.
IBM Collaboration Software/Linux (x86) Compatibility Table
Prepared 2011-05-14
Notes 8.5.2FP2 | Domino 8.5.2FP2 | Sametime 8.5.1 (Gateway, Proxy, Media Manager, Community Server, System Console, Classic Meeting, Meeting Server) | Sametime 8.5.1 Connect Client | Connections 2.5 Server | Connections 2.5 Client
RHEL4U7 AS/ES x86-32 X (Kernel 2.6.9-86)
RHEL5.0 Advanced Platform x86-32 plus fix packs (XGL and SELinux must be disabled) X
RHEL5.0 Advanced Platform x86-64 plus fix packs (XGL and SELinux must be disabled) X
RHEL5.0 Advanced Platform x86-32 X
RHEL5.0 Advanced Platform x86-64 X
RHEL5.0U2 Desktop x86-32 X (limitations)
RHEL5.0U4 Desktop x86-32 X
SLED10.0SP1 x86-32 X (XGL SP1)
SLED10.0SP3 x86-32 X X (limitations)
SLED11.0 x86-32 X X (limitations)
SLES10.0 x86-32 plus fix packs (XGL and SELinux must be disabled) X
SLES10.0 x86-64 plus fix packs (XGL and SELinux must be disabled) X
SLES10.0SP1 x86-32 X
SLES10.0SP2 x86-32 X
SLES10.0 x86-64 X
SLES11.0 x86-32 plus fix packs (XGL and SELinux must be disabled) X
SLES11.0 x86-64 plus fix packs (XGL and SELinux must be disabled) X
Ubuntu Desktop 8.04 x86-32 X X (limitations)
Ubuntu 10.04LTS x86-32 X
References
Notes 8.5.2 https://www-304.ibm.com/support/docview.wss?uid=swg27019218
Domino 8.5.2 https://www-304.ibm.com/support/docview.wss?uid=swg27019426
Sametime 8.5.1 https://www-304.ibm.com/support/docview.wss?uid=swg27019283
Connections 2.5 https://www-304.ibm.com/support/docview.wss?uid=swg27016547




Note: Sametime Connect for Linux has limitations compared with Sametime Connect running on Windows. See the Sametime 8.5.1 System Requirements document for more information.

Scheduled restarts for Domino Servers. Yes or No?

Anthony Holmes  12 May 2011 12:23:53
I've been asked the question: Should you schedule regular monthly restarts of Domino?

The answer is "Maybe". Or perhaps more correctly "If you want to".

There's nothing in Domino that REQUIRES a regular server restart, but there are some things that suggest you might want to think about it. There's one IBM Technote that specifically suggests it:

Lotus Domino Server Maintenance Tips

It says "Over time, Domino server memory allocation and fragmentation may cause performance degradation on memory constrained systems resulting in unscheduled server outages."

In a well-sized server environment, memory should be ample for ongoing operations, and both Domino and the operating system should avoid excessive fragmentation. Here's a list of factors you might use to decide whether to schedule regular restarts.
  • High server availability is required (high service levels, end-user sensitivity to unplanned outages): you would be more likely to schedule regular restarts.
  • The server environment has full redundancy, for example clustered servers in different sites: you would be less likely to schedule regular restarts.
  • You are running older (perhaps unsupported) releases of Domino: more likely.
  • You are running 32-bit operating systems (with less available memory): more likely.
  • Your servers are running on old hardware, nearing capacity, or with large numbers of users: more likely.
  • Your servers are running large numbers of Domino tasks, especially add-in tasks (tasks such as POP and IMAP, and third-party products such as anti-virus, content scanning, enterprise archiving, user account management software using Domino add-ins, LEI, and programs that use the Domino C API): more likely, since memory usage is likely to be higher.
  • Your disk system is slow (queues regularly rising above 3) or fragmented: more likely, since memory is more likely to be exhausted while Domino waits for the disk queues to be purged.
  • Your servers are restarted reasonably frequently (every few months) for other reasons, such as to apply OS patches: less likely, although you might schedule an ad hoc restart if a server hadn't been restarted for an extended time.
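
For completeness: if you do decide to schedule restarts, the restart itself is just a console command. The command below performs a controlled shutdown and restart on the platforms I use (check how it behaves on yours); how you schedule it - a Program document in the Domino Directory, or an operating system scheduler - is a separate decision.

restart server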




Two minor layout improvements when using Symphony 3

Anthony Holmes  4 April 2011 14:09:14
There are two small things that have annoyed me when using Symphony 3:
  • Bullet points and other special characters created in Word documents look weird when opened on my Macintosh in Symphony (and other OpenOffice based programs);

    They look like this (see how a bullet point has been replaced with an upward pointing caret symbol): Image:Two minor layout improvements when using Symphony 3

    and
  • There is no spacing between the number and the text in a numbered list

    It looks like this:  Image:Two minor layout improvements when using Symphony 3
    I'd prefer it to look like this: Image:Two minor layout improvements when using Symphony 3

Here are solutions to both issues:

1. Broken Bullet Points and special characters from Word

The upward pointing caret symbol is apparently an issue with the OpenOffice programs on Macintosh not mapping a Windows font. To fix it, go to Lotus Symphony; Preferences... and add the following:

Under Fonts, choose Apply font replacement, and add a replacement so that the font "Symbol" is always replaced with "OpenSymbol".

Image:Two minor layout improvements when using Symphony 3
2. Ensuring there is a space between Numbers and Text with Bullets and Numbering...

Go to Layout; Bullets and numbering...

For each level of numbering (1, 2, 3...) place a "space" character after the full stop (period) in the After field.
Image:Two minor layout improvements when using Symphony 3
If you are using OpenOffice (or LibreOffice or NeoOffice), you will be able to make similar changes, although the exact location of the configuration screens may be slightly different.

Data-mining: my latest tool - DEVONthink

Anthony Holmes  26 March 2011 15:39:13
An important part of my job is to be able to find information on obscure topics quickly. Down the years I accumulated various files I had obtained (presentations, PDFs, Redbooks) into a set of folders I called my Suitcase. For redundancy I have copies of the Suitcase synchronised across a couple of drives.

The problem with storing files on a file system

Although my Suitcase was a great resource for finding information on topics, in truth it had severe limitations: each file could live in only one folder, so I had to remember which folder held it. Information about Sametime embedded in Notes might end up in either the Notes folder or the Sametime folder. And often I found myself downloading a Redbook again because it was quicker than searching for it in the Suitcase. There were also many duplicate files.

Looking for a better alternative

In the back of my mind I often thought that the documents might go into a Notes database: copies could be replicated, so changes would synchronise quickly (certainly faster than running Chronosync across folders containing thousands and thousands of documents). Categories or tagging could easily be applied. And the Notes client's ability to search attachments is superb, providing excellent search capabilities.

And putting it into a Notes database is something I might yet do at some stage.

But for the time being there were obstacles: attaching each file to a Notes document could be automated, but it would still be necessary to classify the documents. Maybe the folder names could be used to provide a first attempt at classification, but that involves coding that would challenge my very limited coding skills.

Once upon a time there was a facility in Notes/Domino called the Domino Network File Store (DNFS): a perfect way to turn random file systems into a usable/searchable/replicate-able database. You could add or remove files via the file system (and applications) but they would be stored in a database. Sadly it no longer exists as a feature. :-(

I played with Evernote for a while. I liked the ability to take meeting notes easily on my iPad, and the simplicity with which they appeared on my main computer. The meeting notes and any other document I added to Evernote could be tagged: but only in an Evernote format. I was left feeling that it wasn't rich enough for my needs.

And so to DEVONthink: easy to start using, lots of rich features

I toyed with DEVONthink about a year ago, using a trial version. Then I stopped using it for a while as I thought through my requirements. In December 2010 I picked up DEVONthink Personal 2 in a MacUpdate bundle. Very quickly I upgraded the licence to DEVONthink Pro Office.

DEVONthink is easy to start using, but you are rewarded if you spend a bit of time getting to know its features. I purchased (the oddly named) Take Control of Getting Started with DEVONthink 2. The book is well worth reading: it helps you adopt DEVONthink effectively and gives suggestions on different approaches that you can use.

Adding my tens of thousands of documents (10+GB of files) to DEVONthink was very simple. You can either import them into a DEVONthink database, or have DEVONthink refer to file system folders. Once the DEVONthink database has access to the files, the fun begins. It quickly identified many duplicate files (even differently named files with identical contents). I could move files from one directory to another much more easily than was ever possible in Finder or Windows Explorer, and I was able to quickly tidy up my folder structure and eliminate duplicate folder names. Even better, if a file related to more than one topic ("Notes" and "Sametime", say) it was trivial to create a "Replica" so that it appeared in all relevant locations.

Auto-classification is where it starts to get really clever: the Data; See Also and Classify option uses some artificial intelligence to suggest documents and folders that might be related to the one you are looking at.

In addition, DEVONthink has a tagging system based on the MacOS X standard OpenMeta.

With DEVONthink Pro Office I simply save new documents into the Inbox, and every so often I go and classify them. Classifying each document only takes a few seconds.

I now take meeting notes on my iPad using DEVONthink To Go. It was a rather simple application, but it got a significant upgrade yesterday, and for my purposes it is now as useful as Evernote was. It is also very easy to share documents from my Mac to my iPad: I just replicate them to my mobile folder in DEVONthink and synchronise my iPad.

I now have a suite of three Research Tools

I now have three main tools for research and storing information:
Tool Where I use it
Lotus Notes
Image:Data-mining: my latest tool - DEVONthink
I have 12 years of mail messages stored in my Lotus Notes archive mail file. It's 10GB in size, but intelligent use of a proper full-text index (date ranges, limiting the names being searched, proximity terms and so on) means that digging up old emails, attachments and instant messaging chats is a cinch. At 10GB, searching a local mail file with an index performs extremely well, and the mail file could still grow another six-fold before I have a problem with space.
DEVONthink
Image:Data-mining: my latest tool - DEVONthink
I'm using this to store local files and I benefit from its search, tagging, replicate, see also & classify and OCR capabilities (mostly described above).
IBM Lotus Connections
Image:Data-mining: my latest tool - DEVONthink
I'll write about this separately. But in short, this is where I go to find information within IBM: and by "information" I mean documents, people, bookmarks, weblinks, discussions, tweets, communities of interest... anything that is being thought about or worked upon by the community of minds working for IBM.


Paragon Support

Anthony Holmes  26 March 2011 14:12:29
Here's a small note of credit directed towards Paragon Software Group, maker of NTFS for Mac OS X.

When I upgraded my main home PC to a Mac Pro, I was a little concerned that the Mac could only read from NTFS disks - it couldn't write to them. Since I wasn't yet completely committed to Macintosh, I was also wary of formatting my archive disks with Macintosh file systems. What if I needed to read a disk in a few years' time and no longer had access to a Macintosh?

So I purchased Paragon's NTFS for Mac OS X 7. Early this year I upgraded it to NTFS for Mac OS X 8. At some stage it started behaving badly: sometimes I'd go to open a folder and nothing would happen for a few minutes, or even forever. Occasionally NTFS disks couldn't be opened at all. Applications that accessed an NTFS disk could hang. So I did what any self-respecting techie would do first: googled the problem. There were a few people on the web complaining that NTFS for Mac OS X was buggy, and they had no solutions. But in one or two places people having a problem had been advised to log a support call with Paragon.

Log a support call for a consumer product? Perish the thought. That couldn't possibly work? Could it?

So, against all my "solve it yourself" instincts, I logged a call. Paragon's service level for a web-based service request was three days - not stunning, but in my case bearable. Exactly within that time frame, they came back with a set of troubleshooting steps, which identified a file that they said should be removed. I removed it, and now the product seems to be working correctly again.

On the one hand they did something simple: they gave me a pre-canned set of steps that probably solves a large percentage of issues with Paragon NTFS for Mac OS X 8.
On the other hand, I only ever paid US$40 + US$10 (Upgrade) to this company. (And when I look today it only costs US$19.95.) But they gave me personal attention to my problem, which was solved on first contact. That means I'm happy to keep using the product and to recommend it to others.

Hard Disk image from Wikimedia Commons: http://commons.wikimedia.org/wiki/File:Hdd_od_srodka.jpg

Every disaster is a little bit different (Brisbane)

Anthony Holmes  13 January 2011 10:52:44
When Brisbane last had a serious flood in 1974, there would only have been a handful of data centres in the city: as well as computers being much less pervasive, it was a city of only half its current population.

Over the years I've seen a few IT "disasters" of varying natures:
  • A data room in a basement that didn't fare well when the sprinklers went off in the floors above it
  • A data centre that simply "lost" power in an instant: no UPS protection, simply "blip" and the power was gone while the servers were in full flight... and they had a tedious amount of consistency checking when they came back up (fortunately some servers had Transaction Logging)
  • A SAN issue that also affected backups... don't ask how that happened

Yesterday one of my customers had to cope with the sudden announcement that their Brisbane office was being evacuated because of the impending flood (something they expected) and that the power was going to be turned off (that was more of a surprise). The computer room is safely out of the water, but the neighbourhood isn't.

This customer is completely prepared for a disaster that destroys the office: they have offsite backups (in Brisbane, but safely located). But a situation that shuts down the office for a couple of days and then reopens it is a little different. If all they did was turn off the server for a couple of days, mail would sit in the hub server's mail.box and then be returned to the sender. They don't want that to happen.

As a relatively simple solution, they have now created temporary mail files on a server in another city and redirected the Person documents to those mail files. It's not something they ever simulated in DR tests, but - given that the business never required multi-site replication of mail files - it's a pretty good solution to come up with at relatively short notice.
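
For anyone wondering what that redirection looks like in practice, it is roughly this kind of change in the Domino Directory (a sketch only: the server name and file path below are made up, and you'd want the directory change to replicate out before the power goes off):

For each affected user's Person document:
Mail server: TempMail/OtherCity (a hypothetical temporary mail server)
Mail file: mail\jsmith_temp.nsf (a hypothetical temporary mail file)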

Image:Every disaster is a little bit different (Brisbane)
The Wheel of Brisbane in flood (from Wikimedia Commons here by user PMBO).