02/17/2004
RSS: A Big Success In Danger of Failure
There’s a lot of hoopla these days about RSS, an XML-based standard for summarizing and ultimately syndicating web-site content. Adoption and usage of RSS have taken off in the past few years, leading some to suggest that it will play a central role in transforming the web from a random collection of websites into a paradise of personalized data streams. However, the seeds of RSS’s impending failure are being sown by its very success, and only some serious improvements in the standard will save it from a premature death.
Push 2.0?
The roots of RSS go all the way back to the infamous “push” revolution of late 1996/early 1997. At that point in time, PointCast captured the technology world’s imagination with a vision of the web in which relevant, personalized content would be “pushed” to end users, freeing them from the drudgery of actually having to visit individual websites. The revolution reached its apex in February of 1997, when Wired Magazine published a “Push” cover story in which it dramatically declared the web dead and “push” the heir apparent. Soon technology heavyweights such as Microsoft were promoting their own “push” platforms, and for a brief moment in time the “push revolution” actually looked like it might happen. Then, almost as quickly as it took off, the push revolution imploded. There doesn’t appear to be one single cause of the implosion (outside of Wired’s endorsement); some say it was the inability to agree on standards, while others finger clumsy and proprietary “push” software, but whatever the reasons, “push” turned out to be a big yawn for most consumers. Like any other fad, they toyed with it for a few months and then moved on to the next big thing. Push was dead.
Or was it? While push as conceived by PointCast, Marimba, and Microsoft had died an ugly (and, most would say, richly deserved) public death, the early seeds of a much different kind of push, one embodied by RSS, had been planted in the minds of its eventual creators. From the outset, RSS was far different from the original “push” platforms. Instead of a complicated, proprietary software platform designed to capture revenue from content providers, RSS was just a simple text-based standard. In fact, from a technical perspective RSS was actually much more “pull” than “push” (RSS clients must poll sites to get the latest content updates), but from the end user’s perspective the effect was basically the same. As an unfunded, collective effort, RSS lacked huge marketing and development budgets, and so, outside of a few passionate advocates, it remained relatively unknown for many years after its initial creation.
Recently, though, RSS has emerged from its relative obscurity, thanks in large part to the growing popularity of RSS “readers” such as FeedDemon, NewsGator, and SharpReader. These readers allow users to subscribe to several RSS “feeds” at once, thereby consolidating information from around the web into one highly efficient, highly personalized, and easy-to-use interface. With its newfound popularity, proponents of RSS have begun hailing it as the foundation for a much more personalized and relevant web experience, one that will ultimately transform the web from an impenetrable clutter of passive websites into a constant, personalized stream of highly relevant data that can reach users no matter where they are or what device they are using.
Back to the Future?
Such rhetoric is reminiscent of the “push” craze, but this time it may have a bit more substance. The creators of RSS clearly learned a lot from push’s failures, and they have incorporated a number of features which suggest that RSS will not suffer the same fate. Unlike “push”, RSS is web friendly. It uses many of the same protocols and standards that power the web today, and it uses them in the classic REST-style “request/response” architecture that underpins the web. RSS is also an open standard that anyone is free to use in whatever way they see fit. This openness is directly responsible for the large crop of diverse RSS readers and the growing base of RSS-friendly web sites and applications. Thus, by embracing the web instead of attempting to replace it, RSS has been able to leverage the web to help spur its own adoption.
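To make the “more pull than push” point concrete, here is a minimal sketch of what an RSS reader does on each polling cycle: an ordinary HTTP GET of the feed URL followed by a parse of the returned XML. The feed URL is just a placeholder; real readers layer conditional requests, caching, and error handling on top of this.

    import urllib.request
    import xml.etree.ElementTree as ET

    FEED_URL = "http://example.com/index.rss"  # placeholder feed address

    def poll_feed(url):
        """Fetch an RSS 2.0 feed over plain HTTP and return its items as dicts."""
        with urllib.request.urlopen(url) as response:
            root = ET.fromstring(response.read())
        items = []
        for item in root.iter("item"):
            items.append({
                "title": item.findtext("title", default=""),
                "link": item.findtext("link", default=""),
                "description": item.findtext("description", default=""),
            })
        return items

    for entry in poll_feed(FEED_URL):
        print(entry["title"], "-", entry["link"])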
One measure of RSS’s success is the number of RSS-compliant feeds, or channels, available on the web. At Syndic8.com, a large aggregator of RSS feeds, the total number of feeds listed has grown over 2,000% in just two and a half years, from about 2,500 in the middle of 2001 to almost 53,000 in February of 2004. The growth rate also appears to be accelerating: a record 7,326 feeds were added in January of 2004, twice the previous monthly record.
A Victim of Its Own Success
The irony of RSS’s success, though, is that this same success may ultimately contribute to its failure. To understand why, it helps to imagine the RSS community as a giant cable TV operator. From this perspective, RSS now has tens of thousands of channels and will probably have hundreds of thousands by the end of the year. While some of the channels are branded, most are little-known blogs and websites. Now imagine that you want to tune into channels about, let’s say, cricket. Sure, there will probably be a few channels with 100% of their content dedicated to cricket, but most of the cricket information will inevitably be spread out in bits and pieces across the hundreds of thousands of channels. Thus, in order to get all of the cricket information, you will have to tune into hundreds, if not thousands, of channels and then try to filter out all the “noise”, the irrelevant programs that have nothing to do with cricket. That’s a lot of channel surfing!
The problem is only going to get worse. Each day, as the number of RSS channels grows, the “noise” created by these different channels (especially by individual blogs, which often have lots of small posts on widely disparate topics) also grows, making it more and more difficult for users to actually realize the “personalized” promise of RSS. After all, what’s the point of sifting through thousands of articles with your reader just to find the ten that interest you? You might as well just go back to visiting individual web sites.
Searching In Vain
What RSS desperately needs are enhancements that will allow users to take advantage of the breadth of RSS feeds without being buried in irrelevant information. One potential solution is to apply search technologies, such as keyword filters, to incoming articles (as pubsub.com is doing). This approach has two main problems: 1) the majority of RSS feeds include just short summaries, not the entire article, which means that 95% of the content can’t even be indexed; and 2) while keyword filters can reduce the number of irrelevant articles, they still become overwhelmed given a sufficiently large number of feeds. This “information overload” problem is not unique to RSS; it is one of the primary problems of the search industry, where the dirty secret is that the quality of search results generally declines the more documents you have to search.
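As a rough illustration of why keyword filtering only gets you part of the way there, here is a toy filter applied to item dicts like the ones returned by the polling sketch above. The keyword list is a made-up interest profile; with summary-only feeds there is very little text for it to match against, and across enough feeds common words still let plenty of noise through.

    KEYWORDS = {"cricket", "wicket", "batsman"}  # hypothetical interest profile

    def keyword_filter(items, keywords=KEYWORDS):
        """Keep only items whose title or summary mentions at least one keyword."""
        matches = []
        for item in items:
            text = (item["title"] + " " + item["description"]).lower()
            if any(word in text for word in keywords):
                matches.append(item)
        return matches

    # Usage: relevant = keyword_filter(poll_feed(FEED_URL))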
Classification and Taxonomies to the Rescue
While search technology may not solve the “information overload” problem, its closely related cousins, classification and taxonomies, may have just what it takes. Classification technology uses statistical models to automatically assign categories to content; these categories can then be stored as metadata alongside the article. Taxonomy technology creates detailed tree structures that establish the hierarchical relationships between different categories. A venerable example of these two technologies working together is Yahoo!’s website directory. Yahoo! has created a taxonomy, or hierarchical list of categories, of Internet sites, and has then used classification technology to assign each web site to one or more categories within the taxonomy. With the help of these two technologies, a user can sort through millions of Internet sites and find just those websites that deal with, say, cricket, in a couple of clicks.
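To make the two pieces concrete, here is a toy sketch, not the statistical models a real classifier would use, of a small taxonomy tree plus a crude term-matching classifier that tags article dicts (like the ones in the earlier sketches) with a category path. Both the taxonomy and the term lists are invented for illustration.

    # A tiny, made-up taxonomy: category -> parent category (None marks the root).
    TAXONOMY = {
        "Sports": None,
        "Cricket": "Sports",
        "Football": "Sports",
    }

    # Crude stand-in for a statistical classifier: category -> indicative terms.
    CATEGORY_TERMS = {
        "Cricket": {"cricket", "wicket", "innings"},
        "Football": {"football", "touchdown", "goalkeeper"},
    }

    def taxonomy_path(category):
        """Walk from a category up to the root, e.g. ['Sports', 'Cricket']."""
        path = []
        while category is not None:
            path.append(category)
            category = TAXONOMY[category]
        return list(reversed(path))

    def categorize(item):
        """Attach category metadata (a path within the taxonomy) to an article."""
        text = (item["title"] + " " + item["description"]).lower()
        for category, terms in CATEGORY_TERMS.items():
            if any(term in text for term in terms):
                item["categories"] = taxonomy_path(category)
                return item
        item["categories"] = []  # unclassified
        return item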
It’s easy to see how RSS could benefit from the same technology. Assigning articles to categories and associating them with taxonomies would allow users to subscribe to “meta-feeds” that are based on categories of interest, not specific sites. With such a system in place, users would be able to have their cake and eat it too: they would effectively be subscribing to all RSS channels at once, but thanks to the categories they would only see those pieces of information that are personally relevant. Bye-bye noise!
In fact, the authors of RSS anticipated the importance of categories and taxonomies early on, and the standard actually supports including both category and taxonomy information within an RSS message. The good news, then, is that RSS is already “category and taxonomy ready”.
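For instance, an RSS 2.0 item can carry one or more category elements, each with an optional domain attribute that can point at the taxonomy the value comes from (RSS 1.0 offers a separate taxonomy module for the same purpose). A reader that wanted to honor publisher-supplied categories could pull them out along these lines; the feed snippet is invented for illustration:

    import xml.etree.ElementTree as ET

    SAMPLE_ITEM = """
    <item>
      <title>A fine day at the crease</title>
      <link>http://example.com/cricket/123</link>
      <category domain="http://example.com/taxonomy">Sports/Cricket</category>
    </item>
    """

    def item_categories(item_xml):
        """Return (domain, value) pairs for each <category> on an RSS 2.0 item."""
        item = ET.fromstring(item_xml)
        return [(cat.get("domain"), cat.text) for cat in item.findall("category")]

    print(item_categories(SAMPLE_ITEM))
    # [('http://example.com/taxonomy', 'Sports/Cricket')]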
What Do You Really Mean?
But there’s a catch. Even though RSS supports the inclusion of categories and taxonomies, there’s no standard for determining what category an article should be in or which taxonomy to use. Thus there’s no guarantee that two sites with very similar articles will categorize them the same way or use the same taxonomy. This raises the very real prospect that, for example, a “Football” category will contain a jumbled group of articles covering both the New England Patriots and Manchester United. Such a situation leads us back to an environment filled with “noise”, leaving us no better off than when we started.
The theoretical solution to this problem is to get everyone in a room and agree on a common way to establish categories and on a universal taxonomy. Unfortunately, despite the best efforts of academics around the world, this has so far proven impossible. Another idea is to map the relationships between different concepts and taxonomies and then provide some kind of secret decoder ring that enables computers to infer how everything is interrelated. This is basically what the Semantic Web movement is trying to do. It sounds great, but it will likely be a long time before the Semantic Web is perfected, and everyone will easily lose patience with RSS before then. (There is actually a big debate within the RSS community over how Semantic Web-centric RSS should be.)
Meta-Directories And Meta-Feeds
The practical solution will likely be a series of meta-directories that collect RSS feeds and then apply their own classification tools and taxonomies to those feeds. These intermediaries would then either publish new “meta-feeds” based on particular categories, or they would return the category and taxonomy metadata to the original publisher, which would then incorporate the metadata into its own feeds.
There is actually strong precedent for such intermediaries. In the publishing world, major information services like Reuters and Thomson have divisions that aggregate information from disparate sources, classify it, and then resell the classified news feeds. There are also traditional syndicators, such as United Media, who collect content and then redistribute it to other publications. In addition to these established intermediaries, some RSS-focused start-ups, such as Syndic8 and pubsub.com, also look poised to fulfill these roles should they choose to do so.
Even if these meta-directories are created, it’s not clear that the RSS community will embrace them, since they introduce a centralized intermediary into an otherwise highly decentralized and simple system. What is clear is that without meta-directories and their standardized classifications and taxonomies, the RSS community is in danger of collapsing under the weight of its own success and becoming the “push” of 2004. Let’s hope they learned from the mistakes of their forefathers.
February 17, 2004 in Blogs, Content Management, Database, RSS | Permalink | Comments (13)
02/15/2004
"Low-end" EAI Is Where The Action's At
While most of the attention in the Enterprise Application Integration (EAI) space has been focused on the development of high-end features such as business activity monitoring and business process management, some of the most interesting innovations are actually occurring at the low-end of the EAI market.
High-End For a Reason
Historically, there has been no such thing as “low-end” EAI. EAI, by its very nature, is a complex, costly, and technically demanding space that generally involves integrating high-value, high-volume transaction systems. In such a demanding environment, failure is simply not an option. Thus, EAI software has typically been engineered, sold, and installed as a high-end product.
“High-end” is of course just another way of saying “very expensive”, and EAI surely is that. The average EAI project supposedly costs $500,000, and that’s just to integrate two systems. Try to integrate multiple systems and you are soon talking about budgets in the millions of dollars.
Ideally, there would be a way to offer high-end EAI software at low prices, but unfortunately the economics simply don’t work. First off, the software engineering effort required to ensure a failsafe environment for high-volume transaction systems is non-trivial and therefore quite costly. Second, the infrastructure that vendors must build to sell, install, and service such high-end software is inherently expensive.
Thus, the very idea of low-end/low-priced EAI software was thought to be a pipe dream, and any vendor crazy enough to sell its software for $10,000 instead of $500,000 was assumed to be on a fast path to going out of business.
A Volkswagen vs. A BMW
Despite the conventional wisdom that “low-end EAI software” is an uneconomic oxymoron, a growing number of start-ups are in fact quietly pursuing this space. These start-ups believe they will be successful not because they are trying to replicate high-end EAI at a lower price, but because they are creating a new “low-end” market by offering a different product to an entirely different, and potentially much larger, market.
To be specific, these low-end EAI vendors differ from their high-end compatriots in several important respects:
1. Focused on Data Sharing vs. Transactions: High-end EAI vendors have traditionally focused on building failsafe, ACID-compliant transaction systems that can handle a corporation’s most important and sensitive data. Low-end vendors do not even attempt to manage transactions; they simply enable basic data sharing between applications, without guarantees, roll-backs, or any other fancy features. Such software is much less robust than the high-end offerings, but it’s also much less complicated and therefore easier to build and support. (A minimal sketch of this fire-and-forget style of data sharing follows this list.)
2. User vs. Developer Centric: High-end EAI products are generally designed to be manipulated and administered by developers. They have extensive APIs, scripting languages, and even visual development environments. Low-end EAI vendors are designing their products to be used by end users or, at worst, business analysts. By eliminating the need for skilled developers, the low-end software significantly reduces set-up and maintenance costs.
3. Hijacking vs. Building: Most high-end EAI products come with their own extensive messaging infrastructures that have been painstakingly built by their developers. In contrast, low-end EAI vendors try to “hijack”, or leverage, existing infrastructures, such as the web or instant messaging, to support their products.
4. Indirect vs. Direct: Selling big, expensive software is a difficult and complex task. That’s why high-end EAI firms have expensive direct sales forces that can spend six to nine months closing the average deal. In contrast, the low-end firms are trying to build indirect sales models that leverage other companies’ sales channels. They can use these channels because their products are less complex to sell and install, and their prices are low enough to make them an attractive “add-on” sale to other products.
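As promised above, here is a rough sketch of what non-transactional data sharing over “hijacked” infrastructure might look like: one application posts a handful of records to a plain HTTP endpoint and another pulls them back down, with no transactions, acknowledgements, or rollbacks anywhere. The endpoint URL is a placeholder of my own, not any particular vendor’s product.

    import json
    import urllib.request

    SHARE_URL = "http://example.com/shared/forecasts"  # hypothetical shared endpoint

    def publish(records):
        """Fire-and-forget publish: POST the records as JSON, with no delivery guarantees."""
        body = json.dumps(records).encode("utf-8")
        request = urllib.request.Request(
            SHARE_URL, data=body, headers={"Content-Type": "application/json"}
        )
        urllib.request.urlopen(request)  # if this fails, nothing is rolled back

    def subscribe():
        """Pull whatever the endpoint currently holds; stale or partial data is possible."""
        with urllib.request.urlopen(SHARE_URL) as response:
            return json.loads(response.read().decode("utf-8"))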
EAI For The Rest of Us
At this point you might be saying to yourself, “no customer is going to be crazy enough to trust its mission-critical systems to a non-transactional EAI platform that is sold by a distributor and uses third-party infrastructure for key components”. You’re right. Using low-end software for traditional EAI tasks, such as linking payment systems together, would be extremely foolish.
However, these low-end systems aren’t designed to go after the traditional EAI market. They are designed to go after a much different market: ad-hoc intra- and inter-enterprise data sharing.
Today, only a fraction of intra- and inter-enterprise data sharing takes place via EAI systems. Instead, most data sharing happens via e-mail or fax machines, and the data involved is often stored in Microsoft Office documents or simple text files. A fairly typical example might be a sales forecasting exercise in which a Vice President of Sales e-mails a spreadsheet to a group of Regional Directors and asks them to fill in their forecasts for the coming quarter. Each Director fills in a separate spreadsheet and e-mails it back to the VP, who has a business analyst open each spreadsheet and combine all of the results into one master spreadsheet.
The Vice President could spend $500,000 on high-end EAI software to build a system for the real-time collection and updating of sales forecasts, but spending $500K to automate this task just isn’t worth it. However, it would be worth spending $10K or $20K, as that would free up the business analyst’s time to actually do analysis and would dramatically improve the speed and accuracy of the data collection effort. This is precisely the market that the low-end EAI vendors are targeting.
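For a sense of just how mundane the automation is, here is a hedged sketch of the forecast roll-up described above, assuming each Regional Director’s numbers come back as a CSV file that gets merged into one master file. The file names and layout are invented; the commercial products wrap this kind of workflow in wizards, templates, and web forms rather than raw code.

    import csv
    import glob

    def consolidate_forecasts(pattern="forecasts/*.csv", output="master_forecast.csv"):
        """Merge every Regional Director's forecast CSV into one master spreadsheet."""
        rows = []
        for path in glob.glob(pattern):
            with open(path, newline="") as handle:
                for row in csv.DictReader(handle):
                    row["source_file"] = path  # keep track of who submitted what
                    rows.append(row)
        if not rows:
            return
        with open(output, "w", newline="") as handle:
            writer = csv.DictWriter(handle, fieldnames=list(rows[0].keys()))
            writer.writeheader()
            writer.writerows(rows)

    consolidate_forecasts()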
Just how large is the market for this kind of low-end EAI software? It’s hard to tell exactly, but I challenge you to spend more than five minutes with a business executive talking about this software and not find at least a couple of projects in their area that could make immediate use of it.
The beauty of these low-end EAI systems is that they make very basic EAI capabilities available for the most mundane applications and allow end-users to set up and tear down these ad-hoc integrations without IT’s involvement. It truly is EAI for the masses.
Early Pioneers
Despite the promise of low-end EAI, most of its vendors remain largely anonymous. One such vendor is an Australian firm called Webxcentric, which has built a low-end EAI system that allows end users to turn any Excel spreadsheet into a sophisticated data collection system. Using Webxcentric’s system, users can define data collection templates from within Excel using a simple wizard interface and then automatically e-mail those templates to end users, who in turn fill in the templates via a web form. One of their customers is a large convenience store operator. The customer was having each of its store managers fax in a sales report at the end of each day, and these faxes were then manually rekeyed into an SAP system (a surprisingly common practice). Using Webxcentric, the store managers simply updated a spreadsheet template and the results were then automatically fed into SAP.
Another low-end EAI vendor is CastBridge, whose software allows end users to publish and subscribe to data both inside and outside of their enterprise from within packaged applications such as Microsoft Excel. I like CastBridge’s architecture so much that I made an investment in the company last year. One of their early customers, a government in Asia, is using the software to link police stations and hospitals together to enable real-time tracking of health and crime statistics. This is a project that could have used high-end EAI software, but the customer preferred the user-friendly, flexible, cost-effective approach offered by low-end EAI.
In both cases, these low-end EAI vendors are not trying to displace existing high-end EAI installations, but to expand the overall EAI market by bringing automated data sharing to previously manual processes.
Low-end = Big Market
While in many ways these systems are highly inferior to high-end EAI software, they still get the job done to the customer’s satisfaction and they do so at a price point that is accessible to far more potential buyers.
By making basic EAI capabilities more accessible, the low-end vendors are dramatically expanding the overall EAI market to encompass a wide range of manual data collection and dissemination processes that until now were not cost-effective to automate. This new market should provide both start-ups and incumbents with far more opportunities for growth than simply adding features on top of the high-end systems. Who ever thought the “low-end” could be such an interesting place to be?
February 15, 2004 in EAI, Middleware | Permalink | Comments (2)
02/14/2004
ELOY Update
Well, my ELOY trade did not last long. As I noted in my previous post, I was worried about the stock’s lack of liquidity as well as the “hidden” dilution/overhang from the preferred stock, so I decided not to take anything more than a symbolic position and put in a stop loss at $5.75, $0.05 above my cost. As fate would have it, just a few days later the stock traded through $5.75 and my stop was executed … at $5.52/share.
How could my stop loss get executed at $5.52 when it was pegged at $5.75? Primarily because I used a stop order instead of a stop-limit order. I did this consciously because of the stock’s poor liquidity: I didn’t want the stock to gap down past my limit and leave the order unfilled, so I went with a plain stop order at $0.05 above my cost, figuring that if the stop was activated I would actually get a fill somewhere between $5.75 and $5.70. In fact, $5.75 was indeed the quoted bid for about 15 minutes, and 2,500 shares cleared at that price about 10 minutes before I was filled at $5.52, but my order wasn’t executed. In all likelihood that’s because I made the trade via E*Trade. E*Trade still sells off its orders to wholesalers, who provide retail investors with terrible fills on a routine basis (something I have written about extensively in the past). If I had placed the order via Datek/Ameritrade, some if not all of the 1,000 shares would likely have cleared at the $5.75 price thanks to Datek’s superior automated execution.
I should mention that ELOY did report Q4 2003 earnings on 2/9. They were in line with the positive preannouncement, and the stock did almost nothing the day after. The company appears to be making solid progress in rebuilding its business but still burned over $1M in cash in the quarter. Thus, from a fundamental perspective, ELOY’s prospects for pulling off a successful short-term turnaround still seem pretty bright. That said, I hate the stock’s complete lack of liquidity (emphasized by my poor fill) and the preferred overhang, so I will just observe from the sidelines. At least I will be able to give my friend (a Wall Street analyst who recommended the stock) a hard time about his pick.
February 14, 2004 in CRM, Stocks | Permalink | Comments (2)
02/03/2004
Trade of the Week: E-Loyalty (ELOY)
I purchased some shares of E-Loyalty (ELOY) yesterday. E-Loyalty is a fairly unremarkable Customer Relationship Management (CRM) consulting company. It was spun out of a company called Technology Solutions in early 2000 in a crass and fairly typical attempt to capitalize on all things remotely Internet-related (thus the E-Loyalty name). Its venture backers were TCV and Sutter Hill, two later-stage players who likely saw an opportunity to take a call center consulting company, gussy it up a bit, and make some quick money.
Incredibly, E-Loyalty hit a high of almost $350/share in mid-February of 2000, just a couple of weeks after the spin-out became effective, likely giving the VCs hundreds of millions in paper profits. But the worm quickly turned and the stock went into free fall. It was down 50% just a couple of months later, and by the end of 2003 it was trading at just $3.65/share, down almost 99% from its high.
It’s not that there haven’t been good reasons for the stock’s decline. The company’s revenues fell from $212M in 2000, to $147M in 2001, to $87M in 2002, and to what looks to be about $80M in 2003. GAAP losses in 2003 will probably be between $17M and $20M, up from last year’s $15M. The company has only about $29M in net cash on hand, and it looks like it will continue to bleed cash for at least a few more quarters.
So why in the world did I buy such a “winner”? First, and most importantly, it was recommended to me by a friend of mine who is an analyst and who used to cover the CRM software space. He knows that I like “fallen angels” with compelling valuations that are arguably in the midst of a turnaround, so he told me about ELOY. I tend to buy at least token positions in all the stocks he recommends because even if they go down I will at least be able to give him a hard time about it. It doesn’t hurt either that the last stock I bought on his advice, Chordiant (CHRD), made me some decent money. His advice, along with a quick peek at the fundamentals on Yahoo! Finance, was enough for me to buy 1,000 shares yesterday.
What I found when I looked on Yahoo! Finance was that, from a financial standpoint, E-Loyalty had many of the characteristics I look for when I screen software stocks for new ideas (although ELOY is clearly not a software stock). According to Yahoo!, ELOY had a market cap of $39.9M and, as I quickly calculated, a net tangible book value of $47.2M, meaning that the company was selling at a 15% discount to tangible book, which provides, as Benjamin Graham would say, a nice margin of safety. Net cash was $29M, and operating cash flows, while negative, were only in the $2M-$3M/quarter range, indicating that the company could operate for a minimum of two-plus years without having to raise more capital. Revenues were likely to be about $80M in 2003, which means the company had an enterprise value to sales ratio of only 0.14.
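For reference, here is the back-of-the-envelope arithmetic behind those figures, using only the numbers quoted above and treating enterprise value simply as market cap minus net cash:

    market_cap = 39.9        # $M, per Yahoo! Finance
    tangible_book = 47.2     # $M, net tangible book value
    net_cash = 29.0          # $M
    revenue_2003 = 80.0      # $M, estimated

    discount_to_book = 1 - market_cap / tangible_book    # ~0.15, i.e. a 15% discount
    enterprise_value = market_cap - net_cash              # ~$10.9M
    ev_to_sales = enterprise_value / revenue_2003          # ~0.14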
While overall financial performance has indeed been terrible in the past few years, there were indications that revenues were stabilizing or even growing. In fact, the company pre-announced a positive quarter earlier this month, which produced a single-day jump of over 36%, from $3.56 to $4.85. Since then the stock had been climbing steadily as new buyers trickled in, many likely drawn by some of the same things I was now seeing.
Outside of the revenue growth, a few other things attracted me to the stock. One was that I had already seen stronger-than-expected Q4 software license sales at a number of CRM vendors, such as Siebel and E.piphany, indicating that the CRM software market was indeed coming back to life a bit (and with it, presumably, CRM consulting). Another was the fact that consulting firms have highly leveraged exposure to incremental revenues. What I mean is that because consulting firms must eat the cost of unbillable consultants, any increase in the utilization of those consultants drops almost entirely to the bottom line. With gross margins of only 16% in Q3 2003, ELOY clearly had very poor utilization (healthy consulting firms run around a 50% gross margin). Given this, even modest revenue growth would likely have a significant impact on the bottom line, possibly generating the company’s first-ever GAAP-profitable quarter sometime in 2004, and my experience with these kinds of stocks suggests that once they hit GAAP profitability they tend to enjoy another leg of significant price appreciation as the market bids them up into a multiple range comparable to their competitors’. One final factor that attracted me was that the stock is very thinly traded (only 17,000 shares/day). This lack of liquidity suggests that if the stock has in fact turned the corner, it could run up very rapidly, and that even slightly positive news could generate significant gains.
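To illustrate that leverage with purely hypothetical numbers (not ELOY’s actual figures): if the consultants are essentially a fixed cost whether they are billable or idle, then revenue gained by putting idle consultants to work carries almost no incremental cost, so nearly all of it falls through to gross profit.

    # Hypothetical consulting P&L to show utilization leverage (not ELOY's numbers).
    revenue = 20.0            # $M per quarter
    consultant_cost = 16.8    # $M, roughly fixed whether consultants are billable or idle
    gross_margin = (revenue - consultant_cost) / revenue       # 16%, a depressed quarter

    extra_revenue = 2.0       # $M of new work staffed with previously idle consultants
    new_gross_profit = (revenue + extra_revenue) - consultant_cost   # cost base barely moves
    new_gross_margin = new_gross_profit / (revenue + extra_revenue)  # jumps to ~24%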
As I said, this quick analysis, plus the opportunity to give my friend merciless grief if the stock went down, was enough to get me to buy a token 1,000 shares yesterday. I resolved, however, to spend some time this morning doing a more thorough analysis of the stock to see if I should take a meaningful position in it.
With that in mind, this morning I went through a number of E-Loyalty’s recent SEC filings. As often happens, this closer look revealed some negative surprises. The first and most important was that the company had actually issued about $21M of 7% preferred stock in December 2001, mostly to its VCs, TCV and Sutter Hill. I should have caught this when I looked at the balance sheet on Yahoo! Finance, but I didn’t (preferred stock is listed in the shareholders’ equity section, not as debt). The net effect is to dramatically reduce the “margin of safety” in ELOY by putting $21M of preferred stock ahead of the common. Thus, rather than trading at a 15% discount to tangible book, ELOY is actually trading at over 1.5X tangible book. And since the preferred is convertible into common, there are about 65% more shares outstanding (on an “as-converted” basis) than I thought, meaning that despite what Yahoo! Finance said, the effective market cap of the company is $63.6M, not $39.9M. (In my experience, incorrect market caps are a consistent issue with Yahoo! Finance; I highly recommend taking their market cap numbers with a grain of salt. It makes you wonder how many people are making decisions using the wrong numbers…)
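For what it’s worth, here is one plausible reconstruction of that math using only the figures mentioned above; the conversion price and share counts are my own back-of-the-envelope estimates rather than numbers taken from the filings:

    # Rough reconstruction of the preferred-stock adjustment (illustrative estimates).
    yahoo_market_cap = 39.9      # $M, common shares only, per Yahoo! Finance
    net_tangible_book = 47.2     # $M, before carving out the preferred
    preferred = 21.0             # $M of 7% preferred issued in December 2001
    conversion_price = 5.0       # approximate $/share at which the preferred converts

    # Tangible book left for the common once $21M of preferred sits ahead of it.
    tangible_book_to_common = net_tangible_book - preferred              # ~$26.2M
    price_to_tangible_book = yahoo_market_cap / tangible_book_to_common  # ~1.5X

    # As-converted dilution: the preferred adds roughly 21 / 5 = 4.2M shares, which at
    # the prevailing price pushes the effective market cap toward the $63.6M figure.
    new_shares = preferred / conversion_price                            # ~4.2M shares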
Another negative surprise was that the preferred was sold to TCV/Sutter Hill at about $5/share and has been freely registerable since mid-2002. With the stock now suddenly trading above their cost, there’s a risk that the VCs could start selling their shares or, worse yet, distribute them all at once to their LPs, which would kill a highly illiquid stock like ELOY.
While these were significant negative surprises, it’s not clear that they outweigh the positives. Indeed, most of the factors that initially made ELOY appealing held up on closer inspection. Staff utilization was indeed depressed, at around 60%, indicating the potential for significant margin expansion should revenues grow. Revenue declines and negative cash flows were also decelerating sharply, and the company could clearly become profitable with a little bit of wind at its back.
So what to do? On the one hand, the preferred stock really makes ELOY an unattractive holding: it eliminates what appeared to be a big margin of safety, and it creates a huge overhang (relative to the stock’s liquidity) that could fall at any time. On the other hand, if ELOY’s revenue growth is for real, the stock could quickly turn GAAP profitable and start generating cash, which would likely lead to significant price appreciation given what has happened to other stocks in similar situations recently.
My decision is complicated by a few more facts: 1) I know that my friend will be talking up the stock to other people on the Street, potentially giving it a lot of trading momentum. (He even convinced his boss to buy it, which is a very risky move.) 2) I bought the stock at $5.70 yesterday (E*Trade screwed me on my fill relative to the market, but that’s another post…) and it closed at $6.27 today, up just under 10% in 48 hours. Given that the stock had only average volume today, that indicates to me that there are almost no sellers out there right now, which is a good technical setup for the stock. 3) The company reports next Tuesday. Stocks are typically either strong or weak going into an earnings report, and this stock is clearly strong, so there’s little reason for the positive trend to reverse itself before the report next week.
All that said, I am not going to add to the position. The lack of liquidity will work against me if I try to buy a lot of shares, and if bad news comes out I will get creamed. I am also scared to death of that overhang, especially given that the VCs are now sitting on a roughly 25% gain. I suspect there will be pressure on them to start some limited sales shortly, because the same thing happened to us at Mobius with a couple of our illiquid small-cap holdings. Net net, I am probably just going to put in a stop loss at $5.75 (I have learned my lesson on stop losses) and let it ride until just before the earnings call (2/10). I’ll reevaluate then and see whether I want to risk the report or not.
February 3, 2004 in CRM, Stocks | Permalink | Comments (0)