
01/29/2004

EMC + Documentum = War for Control of Unstructured Data

One of the most interesting recent acquisitions in the software space was EMC’s purchase of Documentum. Not because it was a particularly large acquisition in terms of dollar size or premium paid but because of the strategic implications it has for much of the software industry, especially for companies in the content management, storage software, and database markets.

Documentum isn’t the only acquisition EMC has made recently. It also acquired storage software maker Legato Systems and virtualization leader VMware. However, both of these acquisitions can be seen as incremental expansions of EMC’s existing focus on storage and storage management (and probably a competitive response to some of the moves storage players like Veritas have been making), whereas the Documentum acquisition represents a major leap “up the stack” right into the midst of the classic enterprise software space.

By boldly jumping into the enterprise software space, EMC appears to be, as a fighter pilot might say, “going vertical”. They are making a bet that customers will want to buy the “whole enchilada” from one vendor including not just platters and storage management software, but high-level content management and work flow software as well. Indeed, one can reasonably expect that the logical extension of this strategy will be a series of vertical solutions targeted at specific applications such as claims processing, image management, content publishing, e-mail management, etc.

By providing the entire solution (no doubt delivered by its services division), EMC should theoretically be able to improve margins by focusing its customers on the value of the entire bundled solution as opposed to simply the cost/gigabyte of its storage products.

Even more important than this solutions focus, though, is the fact that EMC is trying to stake a claim to the entire unstructured data management space. EMC’s drive to do this has no doubt been influenced by its customers, who are increasingly buying additional storage not to supplement existing databases or information warehouses, but to store and manage unstructured data such as e-mails, PowerPoint presentations, and web pages.

That EMC can stake a claim to the unstructured data management space without alienating some of its biggest ISV partners (the database and warehouse vendors) has much to do with the fact that traditional RDBMS vendors have been surprisingly reluctant to make major commitments to the unstructured data management space. Many of these players, led by Oracle, continue to hold on to the outdated belief that all of an enterprise’s information will be managed by RDBMSs and therefore they have made few attempts to expand into unstructured data management. This abdication has in turn opened the door for EMC to make a move without causing massive near term channel problems.

That’s not to say that EMC’s move into the unstructured data management space won’t ruffle more than a few feathers. Those that will feel the brunt of this entry are the remaining content management players such as FileNet, Interwoven, Stellent, and Vignette. These firms must now contend with a very large, aggressive competitor selling hardware/software bundles. In addition, EMC’s traditional storage software rivals must now consider whether to respond by making their own forays into the unstructured data management space; Veritas in particular will have some difficult decisions to make. Finally, by letting such a large, aggressive company as EMC into their back yard, the traditional RDBMS players will have to decide whether to hold on to their anti-file-system views or to respond by delivering their own unstructured data management solutions.

Caught in the crossfire between all of these behemoths will be the existing unstructured data players in content management, search, categorization, taxonomy, and work-flow. These relatively small players (many of them still start-ups) will have to decide if it is better to sell out to one of the big players moving into their space or to soldier on and attempt to carve out a defensible niche. In this sense, EMC’s acquisition of Documentum represents just the first shot by a major player in what is likely to be a long conflict for control of the unstructured data management space.

January 29, 2004 in Content Management, Database, Stocks | Permalink | Comments (2)

01/27/2004

The Data-Centric Web

A revolution is quietly brewing on the Internet. It’s a revolution that will ultimately expand the web into something far more useful and productive than it is today and one that will likely undermine much of the conventional wisdom regarding the evolution of web services. At the heart of this revolution is the Data-Centric Web.

Since its creation in the early 1990s, the web has been a document-centric system, and all of its core technologies have been designed to support that vision. HTTP’s simple GET, PUT, and POST APIs were developed to facilitate the sharing of documents across a network. HTML was created to format those documents, URLs (aka URIs) were created to easily identify and access them, and DNS was created to map the host names in those URLs to specific servers on the network. While these technologies have evolved dramatically over the years, they remain fundamentally document-centric.

The Data-Centric Web is different. While the document-centric web revolves around documents and assumes that humans are the main consumers of information, the Data-Centric Web revolves around individual data elements and assumes that computers are its main consumers.

This shift in focus from documents to data and from humans to computers is simple and yet profound. Just imagine a world in which every piece of data is immediately and automatically accessible from any computer via the web using a simple, universal set of protocols and formats. Indeed, such a vision has long represented the “holy grail” of Enterprise Application Integration (EAI) and yet attempts to realize this vision have been woefully inadequate to date.

However, it appears increasingly likely that the Data-Centric Web will become a reality thanks to the introduction of a few new technologies as well as the “hijacking” of some existing ones. The most important new technology for the Data-Centric Web is the eXtensible Markup Language, or XML. Like HTML, XML descends from SGML, but unlike HTML, which is designed to format and present text, XML is designed to identify and structure data. Using XML, programmers can identify individual data elements and put those elements into context within a larger taxonomy (e.g., this data element is the company’s zip code and it is part of their shipping address), thus turning plain text into data.
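As a rough illustration (the element names and the company here are invented, not drawn from any real schema), this is how a zip code might be marked up so that its context travels with it, and how a program can pull it back out:

```python
import xml.etree.ElementTree as ET

# Hypothetical XML: the zip code is no longer bare text -- its place in
# the larger taxonomy (the shipping address of a company) travels with it.
doc = """
<company name="Example Corp">
  <shipping_address>
    <street>123 Main St</street>
    <city>Springfield</city>
    <zip_code>01234</zip_code>
  </shipping_address>
</company>
"""

root = ET.fromstring(doc)
zip_code = root.find("shipping_address/zip_code").text
print(zip_code)  # -> 01234
```

A computer reading this fragment no longer has to guess which string is the zip code; the markup says so explicitly.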

But turning text into data is only half the battle. Once text is turned into data, there still needs to be a way to find and share this data. That’s where the Data-Centric Web starts hijacking. Specifically, the Data-Centric Web hijacks HTTP, URLs, and DNS. These existing technologies have been stalwarts of the document-centric web, seamlessly interconnecting billions of pages of text across the Internet, and the Data-Centric Web simply hijacks them and uses them to interconnect data instead of text.

For example, if you type http://finance.yahoo.com/q?d=v1&s=ibm into your web browser you will, via the magic of HTTP, URLs, and DNS (among other things), be taken to a web page on Yahoo Finance that gives you IBM’s current stock quote as well as a bunch of other related information. While to a human being this stock quote page looks great, to a computer it looks like a random jumble of text and images. However, suppose there were a URL like http://data.finance.yahoo.com/quotes/ibm/last_price and that, instead of returning a large page filled with information, a GET request simply returned “99” in XML. Now imagine that every data element in the world is formatted in XML and has its own URL. The web has just become a giant library of data, not just documents.
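To make the idea concrete, here is a minimal sketch of what a client might do with such a response. The data URL is the hypothetical one above, so the HTTP GET is simulated with a stub rather than a real network call:

```python
import xml.etree.ElementTree as ET

def fake_get(url):
    """Stand-in for an HTTP GET against the hypothetical data URL.
    A real client would use an HTTP library; returning the imagined
    XML body directly keeps the example self-contained."""
    assert url == "http://data.finance.yahoo.com/quotes/ibm/last_price"
    return "<last_price>99</last_price>"

body = fake_get("http://data.finance.yahoo.com/quotes/ibm/last_price")
price = float(ET.fromstring(body).text)
print(price)  # -> 99.0
```

Three lines of client code and the quote is a number a program can use, not a page a human has to read.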

Once the web becomes a library of data elements, easily and universally accessible via URLs, the very nature of data exchange and integration will be transformed. Need to integrate the current temperature in New York City into that Java program you are writing? Just paste in the URL from Weather.com. Need to get a stock price into Excel? Just type in a URL from Yahoo! Finance. A website wants your shipping address? Just give it a URL (and when you change your shipping address, all the businesses that have your “shipping” URL will automatically get the change). Need to give a business partner access to your inventory levels? Just e-mail them a URL and let them figure out how they want to integrate it.

This last example highlights a crucial advantage of the Data-Centric Web. Because the Data-Centric Web relies on the web to provide common API, transport, and naming, the complexity of bilateral data integration efforts declines dramatically. Businesses can simply define their data once and then let their partners pick and choose how they want to access it.

But wait, you might say, isn’t this what web services are all about? Aren’t RPC-based web services supposed to transform the web from a document-centric model to a service-centric model, and since you can usually access data via a service, aren’t web services essentially the same thing as the Data-Centric Web?

Yes, RPC-based web services can accomplish the same end goal of accessing data over the Internet, but they lack several aspects of the Data-Centric Web:

1. Web services do not make use of a universal API such as HTTP; rather, they let users define their own APIs and, if they want to, describe those APIs in a standardized way (via WSDL). The power of the Data-Centric Web is that it uses a universal API (HTTP) and a standardized naming convention (URLs). I know it’s kind of strange to say that web services standards aren’t standardized, but they aren’t!

2. Web services are not integrated into the infrastructure of the web. Over the past 10 years, the document-centric web has integrated itself not only into the Internet but into applications and databases around the world. Thousands of interfaces have been written to the web, and most of today’s most popular applications are, if not web-centric, fully web-aware. For example, Microsoft Office has a powerful set of components that enable users to access the web from within its applications. By being fully “web-compliant,” the Data-Centric Web can take advantage of this massive pre-existing infrastructure as well as all of the skills and knowledge of those who maintain and contribute to it. True, web services are rapidly being adopted and integrated into the infrastructure, but it will be many years before they approach the level of integration that the web itself has already achieved.

3. Web services are much less efficient than the Data-Centric Web. Web services are great for complex queries and business operations with lots of parameters, but they are overkill for simple data access operations. What takes thousands of lines of code and lots of cycles to accomplish with web services can be accomplished with one URL and a few cycles in the Data-Centric Web.
In fact, the web-unfriendly nature of RPC-based Web Services has set off a debate within the Web Services community and there are many efforts underway to make Web Services more web-friendly, mostly by using the concepts embodied within the Data-Centric Web. (This debate is often referred to as the REST vs. RPC debate, something I hope to write more about in the future.)
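The contrast between the two styles can be caricatured in a few lines. The SOAP envelope below is hand-written for illustration (the operation name and namespace are invented, not taken from any real WSDL):

```python
# Data-centric access: the "API" is just HTTP GET on a well-known URL.
rest_request = (
    "GET /quotes/ibm/last_price HTTP/1.1\r\n"
    "Host: data.finance.yahoo.com\r\n\r\n"
)

# RPC-style access: a custom operation, described in WSDL, wrapped in a
# SOAP envelope that must be generated on one side and parsed on the other.
soap_request = """<?xml version="1.0"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <GetLastPrice xmlns="urn:example-quotes">
      <Symbol>IBM</Symbol>
    </GetLastPrice>
  </soap:Body>
</soap:Envelope>"""

# Both payloads ask the same question; one carries far more machinery.
print(len(rest_request), len(soap_request))
```

Both requests ask for the same data element, but only the first one can be typed into a browser, pasted into a spreadsheet, or e-mailed to a partner.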

So how close are we to realizing the promise of the Data-Centric Web? Closer than you might think. Given that the Data-Centric Web hijacks much of the infrastructure built for the document-centric web, the key components are already in place. Indeed, many programmers have been employing the principles of the Data-Centric Web in their own applications for some time. What is needed for wide adoption of the Data-Centric Web are just a few “bridges” from the data world to the web world. These bridges would enable databases and applications to easily publish data elements to web servers, which in turn would assign those elements URLs and thus make them accessible to the entire web. Some of these bridges will probably be built by the open source community, others by enterprising young start-ups. (Castbridge is a particularly interesting start-up focused on this area.)
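A minimal sketch of what such a bridge’s core might look like, assuming a simple path-to-element mapping (the paths and values are invented; a real bridge would sit behind an actual web server):

```python
# A toy "bridge": publish individual data elements at URL paths and
# serve each one as a tiny XML document. A real bridge would hand this
# routing off to a web server; the router function is the testable core.
data_elements = {
    "/quotes/ibm/last_price": ("last_price", "99"),
    "/weather/nyc/temp_f": ("temp_f", "31"),
}

def serve(path):
    if path not in data_elements:
        return 404, "<error>not found</error>"
    tag, value = data_elements[path]
    return 200, "<%s>%s</%s>" % (tag, value, tag)

status, body = serve("/quotes/ibm/last_price")
print(status, body)  # -> 200 <last_price>99</last_price>
```

Everything else (naming, transport, caching, security) is inherited from the web itself, which is precisely the point.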

No matter who builds them, these bridges will unleash a new spurt of innovation and productivity on the web. Data elements will be integrated into applications and documents around the world with the ease and simplicity of typing in a URL and the whole system will work with the robustness and availability that we have come to expect from the web. I may be a geek, but I can’t wait!

January 27, 2004 in EAI, Middleware | Permalink | Comments (0)

01/26/2004

Software Stock Spreadsheet

Download file

I use this spreadsheet to help spot potential long/short ideas. It's a fairly comprehensive list of all the public enterprise software companies out there (including some Internet-related companies for good measure), and it uses Microsoft's MSN Money automatic stock price download toolbar within Excel to get current stock prices for each company, so it updates easily. Typically, I'll update the prices and then, for a select group of companies, do some more in-depth balance sheet analysis with a focus on enterprise value, tangible book, and revenue growth to narrow down the possible investment candidates. As I said in my last post, there are increasingly many more short opportunities out there (for example, check out Autonomy (AUTN)) than long ones, but shorting in this environment is very risky given the large amount of money flowing into the sector.

January 26, 2004 in Stocks | Permalink | Comments (0)

Software Stock Universe

I was updating a copy of my Software Stock Universe Spreadsheet today (attached below) and it got me thinking about the past year. In October of 2002 I used this same spreadsheet to try and find some PIPE ideas for Mobius. At that point, a large number of the small-cap enterprise software stocks were selling at or below tangible book value and I figured that a number of them would make good investments given, as Benjamin Graham might put it, the high margin of safety embedded in the stocks.

My investment criteria at the time were pretty simple: 1) the stock had to be trading near or below tangible book; 2) the company had to have at least 6 quarters of cash in the bank; 3) it had to have only marginally negative cash flow, i.e., by the time the cycle turned it would still have a lot of cash; 4) it had to have an enterprise value to tangible book ratio of less than 1.5; and 5) it had to be in a business that I thought had long-term promise. Using those criteria we whittled a list of 300+ names down to just 23 candidates. Unfortunately for Mobius, it turned out that companies that met these criteria weren’t really great PIPE candidates, so we ended up shelving the idea of doing any PIPEs at Mobius at the time.
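For what it’s worth, the five criteria translate almost directly into code. This sketch uses invented field names and sample figures (in $MM), not actual data from the spreadsheet:

```python
def passes_screen(c):
    """The five 10/02 criteria, roughly as described above. Figures in $MM;
    'quarters_of_cash' is cash divided by quarterly burn."""
    price_to_tbv = c["market_cap"] / c["tangible_book"]
    ev = c["market_cap"] - c["net_cash"]
    return (
        price_to_tbv <= 1.1                 # 1) near or below tangible book
        and c["quarters_of_cash"] >= 6      # 2) at least 6 quarters of cash
        and c["quarterly_burn"] <= 2.0      # 3) only marginally negative cash flow
        and ev / c["tangible_book"] < 1.5   # 4) EV / tangible book < 1.5
        and c["long_term_promise"]          # 5) judgment call on the business
    )

candidate = {"market_cap": 90, "tangible_book": 100, "net_cash": 60,
             "quarters_of_cash": 10, "quarterly_burn": 1.5,
             "long_term_promise": True}
print(passes_screen(candidate))  # -> True
```

Only the fifth criterion resists automation; the other four are just balance sheet arithmetic.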

I did however put together a list of the 23 stocks we felt were most promising at the time entitled "Pipe Dreams" and I was thinking of that list as I updated the spreadsheet today. Specifically, the list included AGIL, ARTG, CALP, CHRD, CRA, CRGN, CTRA, DCLK, ECLP, ET, IDXC, LWSN, NETE, NGEN, NOVL, NYFX, PRSE, PRXL, QTRN, QVDX, RCOM, TPRS, and TTEC. As you can see, there were a couple healthcare related companies in there because the guy I worked with on putting the list together also looked at healthcare related deals. There were also a couple Internet-related names that looked particularly attractive.

Taken as a whole, if you had invested an equal amount in each stock on the list you would have generated a return of 220%+ since 10/29/02. That compares with about a 64% return (before fees) if you had just invested in a NASDAQ index fund or QQQ.
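The arithmetic behind an equal-weight list return is just the simple average of the individual returns; the figures below are illustrative, not the actual 23:

```python
# Equal-weight return: same dollar amount in each name, so the portfolio
# return is the simple average of the individual returns.
returns = [-0.29, 0.79, 7.30, 2.79, 1.50]   # illustrative figures only

portfolio_return = sum(returns) / len(returns)
print(round(portfolio_return, 3))  # -> 2.418
```

Note how one 730%-style winner can carry a whole list even when a name or two loses money.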

The lowest performing stock and the only loser was Tripos (TRPS) at -29% (open source bioinformatics software is killing them), but the next-worst performer was RoweCom (RCOM), and that was up 79%. The best performer was Netegrity (NETE), up almost 730%!

What's more amazing about this performance is that in most cases, these companies did not see dramatic changes in their operating businesses. Today many are still either losing money or have marginally positive cash flow and most had flat to slightly positive revenue growth in 2003.

What accounts for their fantastic performance then? Several things: 1) When a company trades below tangible book, the market is essentially leaving it for dead. Most of the companies on our list had been left for dead, and when it became apparent that they were in fact not dead, the market responded by giving them a multiple more in line with their market comparables. 2) Most of these companies had large cash balances, and once it was clear that they would not be burning through all this cash, the market began to look at the enterprise value of these deals. 3) As the market rallied, the poor liquidity in these stocks helped drive them higher because there wasn't a lot of stock for sale, and small increases in bid-side volume thus resulted in large price gains.

The natural question is: are there any stocks today that look as attractive as the ones we singled out in 10/02? Unfortunately, at first glance it looks like the answer right now is no. Almost all of these stocks have had incredible run-ups in the last 12 months. I am beginning to see some interesting short ideas, but I don't know if I have the stomach to short stocks into this rally, as the rally looks to be more liquidity driven than anything else.

Another natural question you might have: sure, you made a list of stocks in 10/02, but did you actually buy any of them? As I said, we didn't buy any of them at Mobius, as most of these companies didn't need any private placements of capital, but I did buy two of the stocks for my personal portfolio. The first was E*Trade, a company I covered on Wall Street. At the time, E*Trade was trading below tangible book, and while I knew it didn't have the upside of a pure software/Internet stock, I knew that almost any bank in the country would acquire another bank for 1.2-1.5x tangible book, so I saw the stock as a very low risk play. E*Trade did very well, up 279%, 7th best out of the 23 stocks on the list. I sold my E*Trade stock at about $10/share last year as soon as I got cap gains treatment though, so I missed out on its most recent run to $14. The other stock I bought was Netegrity, the top performer on the list. Unfortunately, I also sold that last year as soon as I got cap gains treatment (at about $5/share), and I missed out on much of the run-up. In both cases I sold because I felt the stocks were fully valued, only to see both of them run considerably higher. There are two lessons I have taken away from that: 1) Never underestimate the market's momentum. 2) In personal investing, always issue stop-loss trading instructions and walk them up behind the stock!
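That second lesson, walking a stop-loss up behind the stock, is essentially a trailing stop. A minimal sketch, with an invented price path and a 15% trail chosen arbitrarily:

```python
def walk_stop(prices, trail=0.15):
    """Trail a stop-loss a fixed percentage behind the highest price seen.
    Returns (exit_price, stop_history); exit_price is None if still holding."""
    high = prices[0]
    stops = []
    for p in prices:
        high = max(high, p)
        stop = high * (1 - trail)
        stops.append(round(stop, 2))
        if p <= stop:
            return p, stops   # stopped out; gains near the high are locked in
    return None, stops

exit_price, stops = walk_stop([5.0, 6.0, 8.0, 10.0, 9.0, 8.4])
print(exit_price, stops[-1])  # -> 8.4 8.5
```

The stop only ratchets upward, so a reversal after a run-up forces a sale near the high instead of a round trip back to the entry price.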

I am only holding onto one long position in enterprise software right now and that’s Interwoven, a company I know well from my Wall Street days. I bought IWOV at $5.80/share shortly before we did the PIPE analysis in 2002, using pretty much the same investment thesis, and then bought some more in Q1 03. It no longer matches up with my valuation thesis, but I haven’t sold yet because I still like the company and the space, and having learned from my prior mistakes, I have simply set a stop loss. The stock was up almost 3% today, so it looks like I made the right choice!

January 26, 2004 in Stocks | Permalink | Comments (0)

01/01/2004

Ripping Off The Little Guy, One Trade At a Time

I first wrote this piece in 2000, but I think most of my advice for retail investors still applies. Thanks to the growing popularity of ECNs as well as renewed attention from regulators, wholesalers are not finding it as easy to screw retail investors, but it still happens every day.


How Retail Investors, Especially Internet Investors, Are Having Their Own Trades Used Against Them

Every working day, hundreds of thousands of retail investors enter orders to buy and sell stocks and then put their faith in a system that they believe is set up to get them the best price. However, the dark secret of Wall Street is that retail investors are getting far from the best prices and almost everyone on Wall Street knows it.

While this issue affects almost all retail investors, it affects Internet investors in particular as they trade in some of the most active and volatile stocks on the market and this activity provides ample room for Wall Street middlemen to make plenty of money, often at the expense of their own customers.

As a former Wall Street analyst who actively covered the growth of the online trading industry, I saw the emergence of these questionable trading practices first hand. I have since watched them grow to the point where they seem to have become an accepted way of doing business, even though they are fundamentally unfair to individual investors and highly questionable from both a legal and a moral perspective.

The Rise of the Wholesaler
While there are a number of trends that have pushed these practices to the forefront, the most important has been the rise of the so-called “wholesalers”. Wholesalers are large trading firms that aggregate orders from retail brokers and then execute those orders on behalf of the brokers and, by extension, their retail clients. Indeed, many retail investors are often surprised to learn that their trade is not completed by their own brokerage, but instead is “outsourced” to one of a handful of powerful wholesalers.

Today these wholesalers have become the primary means by which almost all retail investor orders are executed in the market. Many of the largest wholesalers now account for 30, 40, even greater than 50% of the trading volume in particular stocks and wholesalers often dominate trading in the hottest Internet stocks.

Orders = Information
When brokerages send their customers’ orders to a wholesaler, they are sending them valuable information. That’s because these orders represent intentions to buy or sell stocks at specific prices.

Professional investors, such as mutual fund managers, guard such intentions to buy and sell stocks very carefully. After all, if anyone in the market were to find out that a large mutual fund, such as Fidelity or Janus, was trying to sell a big position in a stock, the stock’s price could fall dramatically.

While professional investors take strong precautions to ensure that no one in general finds out about their intentions, they take particular care to prevent their own brokerage firm from finding out their true intentions. That’s because professional investors realize that if the trader at their brokerage firm knew that they wanted to sell a large amount of stock, the trader would likely try to make a profit for their firm by buying or selling stock in advance of completing the investor’s order.

Thus, in the professional investing world, if an investor planned on selling 10,000,000 shares they would, in all likelihood, never tell their broker their true intentions, but instead would “parcel out” the trade in smaller increments, while always trying to keep the brokerage firm’s trader guessing as to whether or not there was more to come.

The Power of Aggregation
At first blush it seems as though retail investors don’t have the same problem. After all, a decision to sell 100 or 1000 shares in most cases isn’t going to move the market.

However, with the rise of the wholesalers it’s become possible for one firm to aggregate tens of thousands of orders in a particular stock. By aggregating all of the individual retail investor orders, wholesalers are able to get some pretty valuable information, information that is often better than anything they can learn from professional investors.

As the wholesalers get more and more orders, in many cases over 50% of the orders in a particular stock, it gets easier and easier for them to determine whether or not a stock might go up or down.

Just imagine the stock market as a giant jigsaw puzzle. If you have 5% of a puzzle’s pieces it’s tough to guess what the picture is, but if you have 40 or 50% of the pieces, it’s a no-brainer.

In a similar way, once an experienced wholesaler controls 30-50% of the order flow in a particular stock, taking advantage of this information to make money for their own account is like shooting fish in a barrel.

The most disturbing part of all this is that the information the wholesaler is using to make money for itself comes from retail customers who have no idea that their information is being used to make someone else rich, often at their expense!

Indeed, there are now a whole series of widely accepted trading techniques used on Wall Street that are explicitly based on supposedly confidential customer information.

Good Morning, You’re Screwed
One of the most basic examples of such techniques occurs just before a stock opens for trading in the morning. Let’s say a major wholesaler, one that typically is the #1 trader in a stock, has a large number of market orders to buy the stock at the open.

Based on this information, the wholesaler can be reasonably certain that the stock is going to open at a higher price than it closed the day before. Given this, the wholesaler can buy as much stock as possible on “after hours” markets, such as Instinet and the Island ECN, prior to the general market open. Then, just prior to the market opening, they would signal to the rest of the market that they have a lot of stock to buy by raising the quote that they publicly display. (The most blatant way to accomplish this is what’s called “crossing & locking” the market, which essentially means aggressively moving a quote up or down so fast that the whole market has to reset.)

Given that this particular wholesaler is the #1 trader in the stock, the other traders in the market are likely to stand aside and let him continue raising the quote, as the other firms can only assume that he has a lot of orders to buy the stock.

The wholesaler’s goal is to increase the opening price of the stock to the point where it is significantly higher than the price of the stock that they just bought on the after hours market. This way they can easily flip the stock they just bought on the after hours market to the individual investors for a significantly higher price than they just paid for it. Not bad for a few minutes work.
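The economics of this gambit are easy to see with some hypothetical numbers (all figures below are invented for illustration):

```python
# Hypothetical figures for the opening gambit: the wholesaler buys
# pre-market, walks the opening quote up, then fills its retail market
# orders at the inflated open.
pre_market_buy = 30.00    # paid on Instinet/Island before the open
engineered_open = 30.75   # opening price after raising the quote
shares = 50_000           # retail market orders filled at the open

wholesaler_profit = (engineered_open - pre_market_buy) * shares
print(wholesaler_profit)  # -> 37500.0
```

Every dollar of that profit is a dollar the retail buyers overpaid relative to the pre-market price.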

Of course, the wholesaler will claim they are merely doing what the retail investors wanted, which is selling them stock at the opening market price, but it’s probably safe to say that 100% of the retail investors would have preferred to get the cheaper “after hours” price.

What’s more, the opening price would never have been higher if the wholesaler hadn’t acted on the information provided to it by the retail investors. Talk about paying for your own participation!

Beware of Price Improvement
It gets worse. Examples like “managing” the open price are common knowledge on Wall Street, but they still entail some risk. In the pursuit of almost riskless arbitrage profits, many wholesalers are now using a tactic called “price improvement”.

“Price improvement” results when a wholesaler actually pays more for a stock than the current market price. This means that if a stock is being quoted at a $30 bid (buy) and a $30 1/2 ask (sell) and a customer wants to sell a stock, the wholesaler will actually “price improve” their order and allow them to sell for $30 1/16. For the seller, this seems like great news as they appear to have gotten an even better price than the best offer to buy (or bid) in the market.

The reality of the situation is a lot less rosy, for both the seller AND the buyer in the transaction. What typically happens in “price improvement” is that the wholesaler currently has a limit order on its book that is “at the market”. Limit orders are instructions from customers to buy or sell only at a specific price. In this case, that would mean that the wholesaler has a limit order from a retail investor to buy the stock at $30.

You might ask yourself, if the wholesaler had an “at the market” limit order to buy at $30 from a customer, why didn’t they just match the limit order to buy with the original market order to sell? That’s a good question and the tip-off that “price improvement” is not what it’s cracked up to be.

Rather than match the orders, what wholesalers typically do is unilaterally increase the bid by 1/16 (the minimum allowed by law) and then execute the trade, not on behalf of a customer, but for their own account. In our example, this means the wholesaler would buy the stock being sold at $30 1/16, rather than simply crossing the order with the open limit order at $30.

At first look, this strategy makes no sense: Why would the market maker buy the stock at $30 1/16 for their own account and take on the risk that the stock would fall, when they could simply do a riskless cross of the trades? It seems like wild speculation. But in reality, the market maker is not speculating at all, and in fact they are executing the trading equivalent of a “slam dunk”.

How could this be? First off, the market maker’s maximum loss on the stock is actually limited to 1/16, that’s because they still have a valid limit order at $30. If the price ever started to fall, they would simply sell their stock to the customer who has the limit order.

While their downside is limited to 1/16, the market maker never would have made the trade in the first place if they didn’t have a very good idea that the stock was in fact going up.

How would they know that? Once again, given that they control a large portion of the trading volume in the stock and have been given hundreds, perhaps thousands, of open orders to buy or sell at specific prices, the market maker has at their disposal a virtual treasure trove of information that only they can look at. For example, if the market maker saw a huge number of open buy orders and a decreasing amount of open sell orders, they might be reasonably confident that the stock was going up.

Thus in our “price improvement” example, the seller didn’t really get the best price, because in all likelihood the market was about to move sharply higher, while the customer with the open limit order to buy never even got a chance to buy as the market maker simply “stepped in front” of their order and then used them as a backstop in case the market turned. How’s that for a customer friendly trading strategy?
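The asymmetry of the trade is clearest in numbers. Using the figures from the example above (pre-decimalization prices, minimum tick of 1/16):

```python
# The "price improvement" trade from the example, in sixteenths.
bid, ask = 30.0, 30.5
tick = 1 / 16              # minimum increment, pre-decimalization

buy_price = bid + tick     # wholesaler pays $30 1/16 for its own account
backstop = bid             # the customer limit order it can always sell to

max_loss = buy_price - backstop   # worst case: dump the stock on the limit order
potential_gain = ask - buy_price  # if order flow says the stock is headed up

print(max_loss, potential_gain)   # -> 0.0625 0.4375
```

Risking 1/16 to make up to 7/16, with private order-flow information tilting the odds, is about as close to a free option as trading gets.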

I could go on and on with more examples of how wholesalers routinely use customer information to improve their own trading profits, but suffice it to say that this is a widespread practice throughout Wall Street today that is generally accepted as part of the business.

Online Brokers to the Rescue?
One would think that if Individual investors were really being taken advantage of so badly, that their brokerages would come to the rescue. After all, they do have a moral and legal responsibility to see that customers get the best execution possible.

Problem is, the online brokers are in on the action. In return for directing their trades to these wholesalers, online brokers receive “payments for order flow” (also known as rebates). These rebates are little more than kickbacks of the excess profits that wholesalers are able to achieve by using the information unwittingly provided to them by retail investors.

Thus, the online brokerages have very little incentive to cry foul because many of them are getting millions of dollars in payments a month from the wholesalers. Unwilling to shoot themselves in the foot by questioning the legality or morality of the current gravy train, the online brokers prefer to look the other way and collect their checks.

Indeed, several of the largest online brokers actually own major wholesalers. By owning the wholesalers, these brokers capture 100% of the profits from the “information arbitrage” that is, disturbingly, perpetrated on their own customers. This strategy in essence enables them to secretly “charge” far more per trade than their publicly quoted prices.

What Retail Investors Can Do
Given that almost all of the established interests on Wall Street don’t seem to care that much of the market has degenerated into what appears to be an organized front-running operation, what can individual investors do to protect themselves?

Unfortunately, as long as wholesalers and traders in general are allowed to use customer order information to benefit their own trading activities, retail investors will still be at their mercy. There are, however, a few things that investors can do to protect themselves, short of getting the SEC and Congress to ban the current practices. These include:

1. Don’t Enter a Market Order To be Executed at the Open: As we covered earlier, market orders at the open are like a license to steal for wholesalers because the wholesalers can trade for their own account prior to the opening and then influence the opening price.
2. Don’t Trade Within The First Hour of The Open: Given the tremendous games that go on during the open, I would recommend that, if they can help it, most investors not trade at all within the first hour or so of trading as it takes that long for much of the market open gamesmanship to play itself out.
3. Always Enter Limit Orders Whenever Practical: Given that market orders are far more malleable in the hands of a wholesaler than limit orders, it’s wise to use limits as much as possible. Limit orders won’t protect you from being used in “price improvement” schemes, but they are better than nothing.
4. Move to a Broker That Uses ECNs: Outside of using limit orders, the best thing for retail investors to do is to use a broker that books their limit orders on one of the major Electronic Communications Networks, or ECNs. ECNs are computers that dispassionately match orders between buyers and sellers. ECNs do not trade for their own account, so they have no profit motive on the trade. As a result you can expect them to handle your order in as unbiased and as efficient a manner as possible. Of the major online brokers only one, Datek Online, routinely uses an ECN to book limit orders. While no broker is perfect, Datek’s execution is about as good as you are going to get. Outside of Datek, there are a few other firms, such as Tradescape (which SOFTBANK has an investment in) and CyberCorp (now owned by Schwab), that provide access to multiple ECNs, but they are generally targeted at the most active traders and thus may not be appropriate for all investors.
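What “dispassionately matching orders” means in practice is simple price-time priority with no house account on the other side. Here is a toy sketch of that idea; the class, method names, and prices are my own illustrative assumptions, not any real ECN’s matching engine:

```python
class ToyECN:
    """Toy limit-order book: match on price-time priority, rest the remainder.
    The venue itself never takes a position, so it has no stake in the price."""

    def __init__(self):
        self.bids = []  # resting (price, shares), best (highest) bid first
        self.asks = []  # resting (price, shares), best (lowest) ask first

    def submit(self, side, price, shares):
        fills = []
        opposite = self.asks if side == "buy" else self.bids
        crosses = (lambda p: price >= p) if side == "buy" else (lambda p: price <= p)
        # Fill against the best resting opposite orders while prices cross.
        while shares and opposite and crosses(opposite[0][0]):
            rest_price, rest_shares = opposite[0]
            fill = min(shares, rest_shares)
            fills.append((rest_price, fill))
            shares -= fill
            if fill == rest_shares:
                opposite.pop(0)
            else:
                opposite[0] = (rest_price, rest_shares - fill)
        if shares:  # anything unfilled rests in the book at its limit price
            book = self.bids if side == "buy" else self.asks
            book.append((price, shares))
            book.sort(key=lambda o: -o[0] if side == "buy" else o[0])
        return fills

ecn = ToyECN()
ecn.submit("sell", 50.25, 300)        # no bids yet, so the offer rests
print(ecn.submit("buy", 50.25, 200))  # [(50.25, 200)]: crossed at the resting price
```

Note what is absent: nothing here can “step in front” of a resting order, because the matcher has no account of its own to trade for.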

Someday, We Won’t Get Screwed This Way
Someday soon the government or the markets themselves may take a close look at these issues, and maybe they will make some changes. Possible changes would be to require market makers either to book customer limit orders with a third-party ECN or to establish a Chinese wall between their customer order book and their traders.

There’s a slim chance that the wholesalers, and other market makers like them, will realize the tenuous ethical and legal ground they are standing on and proactively change their practices, but given that this is the same crowd that routinely fixed prices on the NASDAQ for decades, it’s hard to see how they could have a sudden attack of conscience.

So until such time that either the government wakes up or hell freezes over, it’s “investor beware” for retail investors. Never assume that your order is being held in confidence, and always assume that someone else may use your order to improve their own trading decisions. It’s a sad fact that Wall Street has come to this, but if you factor this into your trades you’ll be a better investor for it.

January 1, 2004 in Stocks, Wall Street