
The FCC Decides to Eviscerate the Neutral Internet

By David Pakman

Like many in the tech community, I was both shocked and dismayed by the FCC's sudden about-face on the basic principles of net neutrality.

"The principle that all Internet content should be treated equally as it flows through cables and pipes to consumers looks all but dead." – The New York Times, "FCC, in a Shift, Backs Fast Lanes for Web Traffic"

If the FCC's proposed rule-making goes forward, it will be the beginning of the end of the open, non-discriminatory internet. The proposal plainly favors large companies with deep pockets at the expense of new entrants and startups. An internet where all content is treated equally and routed without preference has created a culture of massive innovation and has produced trillions of dollars of wealth and millions of new jobs. Allowing giants to buy preferred access to end users will automatically degrade the quality of service for non-payers. On the internet, we know that speed and responsiveness are essential to delivering high-quality user experiences and delighting customers. This proposal would make it far more difficult for non-payers to deliver the kinds of experiences that are essential for success. The FCC should immediately rewrite its proposed rule-making and eliminate any notion of favorable treatment for any content or payer.


The Future of Software is…Wicked Smaaht

By brianascher


Software as a Service and cloud computing have been transformational for the software industry.  But compared to what is coming next, you ain't seen nothing yet.  First, to appreciate where we are heading, a quick review of where we've been is in order.  Back in the olden days of business software, a software company sold you an application which you installed on your servers and desktops, and which made business processes more efficient, facilitated workflow, and sped up information retrieval.  As you used it, this software accumulated data such as your customer records, financial results, and manufacturing statistics.  If you wanted to deeply analyze this data for trends and insights, you bought Business Intelligence or Analytics packages from a different set of software vendors so you could slice and dice your data, generate reports for executives, and hopefully decipher interesting trends about your business that you would then go act on.  In the early 2000s, Software-as-a-Service companies emerged and enabled you to "rent" business applications rather than buy them, with your employees accessing them through the Internet and their web browsers.  This came with many advantages in total cost of ownership and manageability, but fundamentally most of the first SaaS applications were about workflow and data storage/retrieval, just like their on-premise software forefathers.  In the last few years we've had a "Big Data" explosion and a host of new open source technologies like Hadoop, MapReduce, and Cassandra, packaged by a set of new companies that help businesses manage and manipulate their ever-expanding mountains of data.  Also emerging is a new generation of cloud-based analytics companies that make it easy to slice, dice, and visualize big data sets.

So what's the point of this history lesson?  The point is that, for the most part, all of these business applications and more recent Big Data tools have left the burden of capturing real business insight, making decisions, and taking action on the business customers themselves.  In essence, if you wanted real business value, you had to create that value yourself by getting your employees to use the applications (which often meant manual data input), having analysts mine and interpret the data, and asking managers and executives to make decisions based on what they saw in the reports and charts.  For example, if your company used Sales Force Automation, whether Siebel on premise or Salesforce.com in the cloud, your sales reps had to diligently input data about their sales calls, and management had to be smart about logging in to read the reports, suggest actions for each account, and discern broader trends across the data.  A new breed of software company is emerging, however, that combines data science expertise with deep understanding of business problems.  I call them Data Driven Solutions.  These solutions use algorithmic data mining, not only on your own data but often on external third-party data sets accessible through cloud ecosystems and APIs.  Data Driven Solutions make predictions about business functions, prescribe what to do next, and in many cases take action autonomously.  Trained analysts are not required to query databases; business users get answers directly from the software.  These answers typically feed seamlessly into the flow of business activity, often invisibly.  While this distinction may seem subtle, I believe it is fundamental and disruptive, and represents the future of software.  This is in no way the end of SaaS, but in fact where SaaS is going next, and it presents a massive opportunity to new SaaS innovators and a potential threat to incumbents who do not adapt.


8 Suggestions for Building Data Driven Applications

Think Moneyball, for everything.  Billy Beane of the Oakland A's defied the conventional wisdom of traditional baseball talent scouts by recruiting players other teams undervalued but whom he believed represented great return on investment.  He did this not by relying on his own brilliant sense of which players to recruit but by letting a math whiz run regression analyses on player statistics to figure out which lesser-heralded stats were most predictive of winning baseball games.  The math predicted results and told him which players to acquire, predictions which Beane followed to his team's competitive advantage.  Opportunities to apply this approach in business are practically everywhere.  6Sense* is a new SaaS company that analyzes B2B website traffic and third-party data to predict which prospects are most likely to buy from you, what they will buy, when they will buy, and how much they will buy.  Like Beane, they don't rely on rules of thumb in scoring prospects, such as whether the prospect downloaded a white paper, viewed lots of product detail webpages, or has "Procurement" in their job title.  6Sense has found that these heuristics yield only about 50% accurate forecasts, which is not enough to compel a salesperson to trust the results.  Instead, 6Sense uses a variety of machine learning statistical models to uncover the unexpected correlations that drive predictive accuracy up to 85-90%, which definitely gets a sales rep's attention.  Instead of being a chore to use like Sales Force Automation, 6Sense tells sales reps how to close more deals and earn more commissions.  Infer, Lattice, and C9 are also innovating in the area of predictive CRM solutions.  Use your domain expertise to figure out what problems to solve for your customers, but let the data lead you to new and unexpected insights.
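
To make the contrast between heuristic scoring and model-based scoring concrete, here is a minimal, purely illustrative Python sketch. The features, the synthetic data, and the resulting accuracy gap are all invented for the example; this is not 6Sense's actual approach, just the general shape of learning a predictive score from historical outcomes rather than from rules of thumb.

```python
# Hypothetical sketch: scoring B2B prospects with a learned model instead of
# a hand-written heuristic. Feature names and data are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 5000

# Assumed prospect features: pages viewed, whitepaper download, recency,
# plus a stand-in for external third-party signals.
X = np.column_stack([
    rng.poisson(3, n),                      # product pages viewed
    rng.integers(0, 2, n),                  # downloaded a whitepaper (0/1)
    rng.exponential(14, n),                 # days since last visit
    rng.normal(0, 1, (n, 3)).sum(axis=1),   # combined external signals
])

# Synthetic "bought within 90 days" label, loosely driven by the features.
logits = 0.4 * X[:, 0] + 0.8 * X[:, 1] - 0.05 * X[:, 2] + 0.3 * X[:, 3] - 2.0
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
scores = model.predict_proba(X_test)[:, 1]

# A simple heuristic baseline: "downloaded a whitepaper" as the only signal.
heuristic = X_test[:, 1]

print("model AUC:    ", round(roc_auc_score(y_test, scores), 3))
print("heuristic AUC:", round(roc_auc_score(y_test, heuristic), 3))
```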

Build in Data Learning Loops.  Google enjoys a very powerful form of network effect.  The more searches they run and the more resultant clicks they see, the better they understand the intent of what a searcher wanted to find.  This makes their search algorithms better, which earns them more user searches, which keeps the search quality/volume flywheel spinning.  This notion of a "learning loop" can be applied to many business settings as long as you find a way to "close the loop" and see how your prediction or answer actually fared.  For example, AppNexus* is an AdTech company that operates an exchange where publishers and ad networks on one side are matched by algorithms with advertisers and agencies on the other side to put the right ad in front of the right audience at the right time.  Learning loops are built into the bidding and optimization algorithms, which get the chance to learn from their results more than a billion times per day.  Data Learning Loops are powerful sources of competitive advantage, akin to natural monopolies for those who achieve the greatest scale.
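
As a toy illustration of what "closing the loop" means in code, the sketch below makes a prediction for each incoming event, later observes the real outcome, and feeds that outcome back into an incrementally trained model. The event stream and features are invented; this is not AppNexus's system, only the basic feedback pattern under those assumptions.

```python
# Toy illustration of a data learning loop: predict, observe the outcome,
# then update the model with that outcome. Data and features are invented.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(1)
model = SGDClassifier()            # incrementally trainable classifier
classes = np.array([0, 1])

def true_outcome(x):
    # Stand-in for the real world (e.g. did the ad get clicked?).
    return int(rng.random() < 1 / (1 + np.exp(-(2 * x[0] - x[1]))))

seen = 0
for step in range(2000):
    x = rng.normal(size=2)         # features of the next event
    if seen > 0:
        _ = model.predict([x])     # serve a prediction (unused in this toy)
    y = true_outcome(x)            # later, the real outcome arrives
    model.partial_fit([x], [y], classes=classes)  # close the loop
    seen += 1
```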

Don't Just Describe; Predict and Prescribe.  Some may ask whether Data Driven Solutions are just a new name for Business Intelligence.  I don't think so.  Analytics packages mostly describe what is happening by sorting and filtering your data to show sums, averages, and trend lines in tabular or graphical format.  Data Driven Solutions go much further by using the data to make predictions and even prescribe or execute actions.  A good example is the retail industry and Point of Sale results.  POS data is the basket-by-basket, SKU-by-SKU, store-by-store sales results collected across hundreds of thousands of retail outlets daily.  Nielsen has been compiling this data for decades and batch processes data sets for retailers and their vendors to study on a monthly basis.  How those vendors and retailers derive value from the data is up to them.  Retail Solutions* is a data driven solutions company which also gets POS data from retailers and shares it with vendors, but on a near real-time or daily basis.  More important than the freshness of the data, however, is that Retail Solutions offers predictions and prescribes actions as solutions to business problems.  RSI doesn't just create reports; it predicts when you will be out of stock on a given SKU and sends mobile alerts to shop clerks, distributors, and store managers to make sure the shelves stay full.  This is one of over ten predictive and prescriptive applications they provide in order to maximize return on investment for their customers.  Pretty charts are not enough.

Data Is Not the Point; Focus on Solutions.  Lots of companies market themselves as "Big Data" companies, but unless you are selling to IT departments whose problems actually include managing lots of data, most business customers don't really care about data.  They care about solving business problems.  Athenahealth* helps doctors get paid faster.  It turns out cash flow is really important to doctors, and as a result Athena has grown very quickly and is now one of the largest SaaS companies.  Doctors don't care how Athena actually does what it does, which happens to involve statistical analysis of massive amounts of insurance claims data and heavy use of learning loops.  The team at Athena deeply understands healthcare and doctors, so it astutely markets itself as a complete solution to real problems and resists pounding its chest about how smart it is at Big Data.  In fact, the word "data" does not appear even once on their homepage.  Smart move.

Horizontal Strategy: Solve New Problems in New Ways.  Providing applications for horizontal business functions like sales, finance, or human resources, which work similarly across many industries, represents a very large opportunity because the market sizes are huge.  As a result there are powerful SaaS incumbents, such as Salesforce.com, Netsuite, and Workday, in each of these functional domains.  As you would expect, these players are starting to add data driven application intelligence to their offerings.  Fortunately for startups, the challenges businesses face are constantly changing, creating opportunities to be the first to solve new problems with a new approach.  In the realm of marketing, for example, "Content Marketing" is the hottest new trend and is the digital marketing approach seeing the greatest increase in budget allocation.  Yet marketers are highly confused as to what content to produce, how to produce it, where and how to distribute it, and especially how to measure ROI.  Captora is a young startup that has jumped on this new problem with data and domain expertise, is seeing rapid growth, and is using its head start to establish a beachhead before direct competition comes at it.  Knowing the experience of the team, they won't be resting on the side of the road but rather racing ahead to broaden their solution in sync with the new challenges facing modern marketers.

Vertical Strategy: Feed the Starving.  Providing deep solutions in specific industry verticals like healthcare, entertainment, or education can be a huge opportunity.  This is especially true in industries where data has largely been non-existent or hard to access, as has been the case in the three industries I just mentioned.  If a Data Driven Solution can access, interpret, or create new data and use it to solve a big problem, the market reaction can be like a starving person being offered a hot meal.  Castlight Health*, for example, solves the problem that in healthcare it is generally impossible to know what a given service (an office visit or a test, for example) will cost until after you've consumed the service and received your bill 30 days later.  It turns out that the variance in pricing for even a commodity service like an MRI can be 5 to 10x within a given 5-mile radius.  If one could know the price difference ahead of time, one could consume intelligently, as we do in most other shopping situations.  Large employers, who tend to be self-insured, really like the idea of helping their employees spend less on healthcare, as those savings drop straight to the bottom line, and as a result some of the largest employers in America have adopted Castlight's solution.  Customers like CVS Caremark, Microsoft, and Wal-Mart don't really care about the big data blahdy blah that Castlight uses to come up with its solution; they just know they are starving for ways to lower their employee health care costs and Castlight has an effective solution.

Vertical Strategy: Serve the Sophisticated.  While some industries are just getting their first taste of Big Data, others, such as the investment industry, airlines, and eCommerce, have been sophisticated handlers and miners of Big Data for a long time.  In those fields a small incremental advantage afforded by a data driven vertical solution can be extremely valuable.  DataMinr* is a company that transforms the full Twitter stream of public tweets using sophisticated math to discern important news events amid all the noisy babble as quickly as possible, ahead of the media.  Investment hedge funds will pay handsomely for incremental advantage, and getting a jump on news that might move the market or a particular stock is something they are eager to buy, even amidst all their number-crunching sophistication and home-grown solutions.  On April 23, 2013, when the stock market "Flash Crash" occurred based on a rumor that the White House was under attack, Dataminr's algorithms figured out the attack was a hoax a full two minutes before other news outlets, and their clients were able to act on the news ahead of the market's rapid recovery from the severe dip the rumor had caused.  It turns out that news agencies like CNN, which typically rely on human reporters and shoe leather to break news, have also turned to Dataminr as a solution to their problem.  Dataminr thus serves both a very sophisticated big data segment, investment funds, and an industry at the opposite end of the data automation curve, the news industry, with a solution that simply could not have existed until very recently.

Consumer solutions can be driven by data too.  Using Uber is a magical experience.  Push a button on your phone and a car appears in an instant to take you where you want to go: no hailing, no reservations, no need to reach into your pocket for payment, and remarkably little waiting for your ride to arrive.  If Uber simply sent messages to available drivers about customers needing rides, the system might still be good, but customers would have longer wait times, which wouldn't be as magical.  Instead, Uber uses statistical analysis on data coming from their drivers and riders to predict where demand will be highest and recommends that drivers congregate there to be ready for ride requests.  Nowhere in their marketing does Uber talk about data or "quantifying" your ride patterns; consumers don't need to know how the magic happens as long as their ride shows up quickly.  Similarly, Better Finance* makes secured loans to consumers with low or no credit so they can buy smartphones and other high-value items.  Better Finance can do this at rates far lower than payday lenders because of their data driven underwriting and the feedback loops coming from high loan volumes, so their underwriting algorithms constantly improve, to the benefit of Better Finance and their customers.  Opportunities to create consumer solutions enabled by big data are everywhere…just don't mention the word data.

Traditional SaaS and on-premise software will be around for a long time, and these vendors will add more and more data intelligence to their offerings.  They will be joined, however, and possibly threatened, by a new generation of nimble and innovative SaaS companies that combine data and domain expertise to add massive business value for their customers.

I look forward to meeting and helping as many of those companies as possible.

*Venrock is an investor in these companies.


Net Neutrality

By Richard Kerby

Imagine this:

You sit down at your laptop one day and go to your favorite video streaming website. But instead of getting clips of hilarious cats, you get a message telling you that you have exceeded your streaming allowance for the month, unless you would like to purchase some extra streaming credits.

This is a scenario that opponents of net neutrality promised would never arise, and yet it's one of the first things that happened when the US legislature failed to protect the concept. Immediately, a legal wrangle opened between Netflix and Comcast: the Internet service provider attempted to negotiate rates with the movie streaming service due to the high volume of traffic it was being asked to handle.

Within a free market, Comcast's position makes sense. There is demand, they are the supplier, and they should be able to price their service accordingly. And Netflix can't play the victim in this scenario; they are on the brink of becoming one of the largest media companies in the world, so surely they should contribute to the infrastructure that supports their business model. Right? Netflix decided to give in to this notion, striking a deal in February that put an end to the throttling of traffic generated by their site.

The argument of ISPs (and the companies behind them that provide infrastructure) is that they need to invest in order to provide the best possible service to their customers, and that certain websites make this more difficult. Therefore, in a free market, they should be able to charge those larger sites, or manage traffic accordingly. 

But the Internet is more than Netflix, Facebook, Google, and Amazon. The open web has operated on free and equal access ever since Tim Berners-Lee developed HTML. It has fostered the open atmosphere that allowed bloggers and startups to thrive. The internet is a place where a good idea done well can take on the giants of any field. The end of net neutrality is not Goliath beating David; it is a sign outside saying "No Davids."

An image created by Reddit user Quink displays in stark detail how the Internet might look for users in the future. It may seem far-fetched, but it is actually based on common advertisements for cable packages. This is a model that already exists, and there is no reason it cannot be applied to Internet services.

Consider how this new Internet looks for you, if you run a business website or tech startup. Where do you fit in? Do users get free access to the service you provide? Or do they have to pay extra just for a pageload? What about personal sites and non-profits? Where there is competition, what happens when one company pays ISPs for preferential treatment?

The unknown elements of a post-neutral web are scary. Nobody can really predict how it will look. All we know for sure is that the level playing field of the early Internet will be gone, and this is as much a concern economically as it is politically. Business thrives on fair competition and a closed Internet is inherently unfair. 

It's not too late, though. Just recently, the European Union has enshrined net neutrality into law. Lawmakers in the US may yet see the advantages of an open web and fight to protect it. Not only does net neutrality strengthen constitutional ideals such as freedom of speech and freedom of assembly, but it's good for business and essential for innovation. Let's hope that our government makes the right decision and protects the free and open internet that we have come to cherish.


ESPN vs Cable

By Richard Kerby

In this economy, the cost of cable television is more of a pain than a benefit. With the introduction of lower-cost alternatives such as Netflix and Hulu, more and more people are deciding to cut the cord.  However, there is one problem, one obstacle: where will sports fans go? As it stands, the only way to "legally" watch real-time sporting news and events is to sign up for a cable TV service. This is because the 1992 Cable Act requires that consumers sign up for regular cable service before they can enjoy the benefit of having premium channels like HBO or ESPN.  However, consumers who want cable (and aren't that interested in sports) are being forced to pay a tax on ESPN, and they aren't happy about it.

The Problem that Just Won’t Go Away…

Sports lovers want ESPN, others don't, but everyone's paying for it. As it stands, cable viewers are up in arms about having to pay a tax on the channel whether they want it or not. In fact, Bloomberg Businessweek reported that ESPN charges cable operators $4.69 per subscriber per month (although I've seen much higher rates in other publications). That charge trickles down to the viewer. The bad news is that this price is predicted to grow, as ESPN has announced its plan to broadcast Monday Night Football for the next eight years. As of now, it seems that the ESPN problem just won't go away.

Or Can It?

So you want to be a cord-cutter, but you love your sports. This is a dilemma that many sports fans have, not to mention the problem of having to pay for basic cable just to get ESPN. So what could solve this recurring issue? A subscription-only option, perhaps? If cable TV doesn't impress you, but you want to watch your Monday or Thursday night NFL games whenever they're on, this may prove to be the best way forward. This option would provide an outlet that allows you to just watch (and pay for) what you really want: ESPN. The upside to this solution is that only fans who want the subscription will pay, and people who aren't that interested in sports will no longer have to. Although ESPN has no interest in doing this for the time being, many cable providers are pondering it. When asked about this very prospect in an interview with Bloomberg, Suddenlink Communications' CEO, Jerald L. Kent, said the following:

“If I could offer high-cost channels like ESPN as stand-alone channels, à la carte, I’d do it.”

Another option, one that could get you off the tube and onto your streaming box, is a "Go" option.

The good news is that subscribers can still enjoy the flexibility of ESPNGo. As of now, ESPNGo can only be viewed if a person has subscribed to basic cable, but a standalone option could change that. ESPN-only subscribers could log into their account in the ESPNGo app and enjoy any game, anytime, anywhere.

What an ESPN-Only Option Means for Everyone

For one, people who don't want it will no longer have to pay for it. So, ideally, a subscription-only option will save cable subscribers at least $5 a month. On top of that, those who only want ESPN won't be forced to pay for channels they don't want, which could cost from $40 to $100 a month! A downside to this option is that some cable companies enjoy the inflated rates of bundling channels because, after all, that's how they keep the revenue flowing. The other downside is that they could potentially charge consumers more per month to "unbundle" as they try to deter viewers from cord-cutting.

All in all, a standalone ESPN option is a feasible solution for everyone involved. Cable companies can retain sports fans, who account for a significant percentage of their revenue, and sports fans can take their games on the go. Will cable companies catch on? We can only wait and see.  I, for one, would be willing to pay a large portion of my current cable bill for a sports-only subscription.


The Price of Music

By David Pakman

Will the recorded music industry ever grow again? Since 1999, the industry has been in rapid decline as CDs became unbundled into downloaded singles. The digital download market never came close to the size of the physical music market. Now we are in the midst of another format transition, this time from downloaded singles to streaming. The question many people, like the thoughtful Marc Geiger, ask is: how big will the streaming market be? I think the answer lies not in consumers' appetite for streaming songs but in the price services charge consumers for streaming.

Recorded Music Industry Sales

At the 1999 peak of the recorded music market, about $40 billion of recorded music was sold. How much did the average consumer spend per year on recorded music? Hundreds of dollars? Nope. According to IFPI at the time, across the total 18-and-over population (whether across many countries or within any single one), the average amount spent came to $28 per consumer. But that includes people who did not buy any music that year. If we look at just the consumers who bought music, they spent $64 on average that year. And that was at a time when one had to buy a bundle of 12 songs in the form of a CD in order to get access to just one or two. What has happened since?

Once the bundle broke, the average spending per consumer decreased. This is predictable, since bundles artificially raise the amount of total dollars a consumer spends. The chart below shows the average spending per capita in various countries according to IFPI (note UK Pounds):

[Chart: IFPI average per-capita spending on recorded music by country]

Another study, by NPD Group in 2011, found similar spending: about $55 per music buyer per year on all forms of recorded music (they note that this spending is slightly higher among P2P music service users).

[Chart: NPD Group 2011 estimates of annual spending per music buyer]

But the one retailer on the planet who would really know what consumers are willing to spend on recorded digital music today is Apple. They are the largest music retailer in the world. Their data is very consistent: about $12 per iTunes account per quarter is spent on music, or about $48 per year. Note that this figure declines year by year as iTunes users are confronted with many more choices on which to spend their disposable income, like apps and videos. Also note that total disposable spending per account, on average, is decreasing as iTunes gets bigger and bigger. As a service becomes truly mass market, the new consumers it reaches are willing to spend less than earlier ones were.

[Chart: Apple iTunes music spending per account per quarter]

So the data tells us that consumers are willing to spend somewhere around $45 – $65 per year on music, and that the larger a service gets, the lower in that range the number becomes. These numbers have remained consistent regardless of music format, from CD to download.

Curiously, the on-demand subscription music services like Spotify, Deezer, Rdio and Beats Music are all priced the same, at more than twice what consumers spend on music. They largely land at $120 per year (although Beats has a family-member option for AT&T users at $15 per month). This is because the three major record labels, as part of their music licenses, have mandated a minimum price these services must charge. While it may seem strange that suppliers can dictate to retailers the price they must charge end users for their service, this is common practice in digital music. The services are not able to charge a price they believe will result in maximum adoption by consumers. The data shows that $120 per year is far beyond what the overwhelming majority of consumers will pay for music, and that a price closer to $48 per year is likely much nearer the sweet spot needed to attract a large number of subscribers.

For this reason, I believe the market size for these services is limited to a subset of music buyers, which in turn is a subset of the population. This means that there will be fewer subscribers to these services than there are purchasers of digital downloads unless one of two things happens:

(a) consumers decide to spend more than two times their historical spend on recorded music or

(b) major record labels allow the price of subscription music services to fall to $3 – $4 per month

I think the former is highly unlikely given the overwhelming number of choices competing for consumers' disposable income, combined with the amount of free music available from YouTube, VEVO, Pandora and many others. The data shows that consumer spending per category decreases in the face of many disparate entertainment choices. The latter is the big question. My experience with the major labels when I was CEO of eMusic was that they largely did not believe music was an elastic good. They were unwilling to lower unit prices, especially for hit music, to see if more people would buy. Our experience at eMusic taught us that music *is* in fact elastic and that lower prices lead to increased sales. If the major labels want to see the recorded music business grow again, I believe the price of music must fall.
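
To put rough numbers on the elasticity argument, here is a small, hypothetical calculation. The $120 and $48 annual price points come from the discussion above; the addressable pool and the adoption rates are invented assumptions, included only to show how a lower price could plausibly yield a larger business.

```python
# Hypothetical illustration of the elasticity argument: the price points come
# from the post above, but the pool size and adoption rates are assumptions.
music_buyers = 100_000_000          # assumed addressable pool of music buyers

scenarios = {
    "status quo ($120/yr, ~$10/mo)": (120, 0.05),   # assumed 5% subscribe
    "lower price ($48/yr, ~$4/mo)":  (48, 0.20),    # assumed 20% subscribe
}

for name, (price, adoption) in scenarios.items():
    subs = music_buyers * adoption
    revenue = subs * price
    print(f"{name}: {subs:,.0f} subscribers -> ${revenue / 1e9:.1f}B per year")
```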



Doctors Without State Borders: Practicing Across State Lines

By bkocher2013

This article was first published in the HealthAffairs Blog.

Note: In addition to Robert Kocher, this post is authored by Topher Spiro, Vice President, Health Policy, Center for American Progress; Emily Oshima Lee, Policy Analyst, Center for American Progress; Gabriel Scheffler, Yale Law School student and former Ford Foundation Law Fellow at the Center for American Progress with the Health Policy Team; Stephen Shortell, Blue Cross of California Distinguished Professor of Health Policy and Management and Professor of Organization Behavior at the School of Public Health and Haas School of Business at the University of California-Berkeley; David Cutler, Otto Eckstein Professor of Applied Economics in the Faculty of Arts and Sciences at Harvard University; and Ezekiel Emanuel, senior fellow at the Center for American Progress and Vice Provost for Global Initiatives and chair of the Department of Medical Ethics and Health Policy at the University of Pennsylvania.

In the United States, a tangled web of federal and state regulations controls physician licensing.  Although federal standards govern medical training and testing, each state has its own licensing board, and doctors must procure a license for every state in which they practice medicine (with some limited exceptions for physicians from bordering states, for consultations, and during emergencies).

This fragmented system makes it difficult for physicians to care for patients in other states, and in particular impedes the practice of telemedicine. The status quo creates excessive administrative burdens and likely contributes to worse health outcomes, higher costs, and reduced access to health care.

We believe that, short of the federal government implementing a single national licensing scheme, states should adopt mutual recognition agreements in which they honor each other’s physician licenses.  To encourage states to adopt such a system, we suggest that the federal Center for Medicare and Medicaid Innovation (CMMI) create an Innovation Model to pilot the use of telemedicine to provide access to underserved communities by offering funding to states that sign mutual recognition agreements.

The Current System And Its Drawbacks

State licensure of physicians has been widespread in the United States since the late nineteenth century.  Licensure laws were ostensibly enacted to protect the public from medical incompetence and to control the unrestrained entry into the practice of medicine that existed during the Civil War.  However, it no longer makes sense to require a separate medical license for each state.  Today, medical standards are evidence-based, and guidelines for medical training are set nationally through the Accreditation Council for Graduate Medical Education, the Centers for Medicare and Medicaid Services’ Graduate Medical Education standards, and the Liaison Committee on Medical Education.  All U.S. physicians must pass either the United States Medical Licensure Examinations or the Comprehensive Osteopathic Medical Licensing Examination.

Although the basic standards for initial physician licensure are uniform across states, states impose a patchwork of requirements for acquiring and maintaining licenses. These requirements are varied and burdensome and deter doctors from obtaining the licenses required to practice across state lines.  For example, in all states, applicants must show proof of graduation from an accredited medical school and completion of at least one year of a residency program, provide information about malpractice suits, and pay a fee to the state for initial licensure (usually several hundred dollars) and for license renewal (which in some states must be done annually).

In addition, some states require that applicants undergo further testing, complete specific course work, submit to a criminal background check, participate in a face-to-face interview, or provide proof of participation in other training programs or a log of continuing medical education courses.  Once applicants have fulfilled the initial license requirements, state agencies can take several months to process their applications.

Not only does this system impose direct costs on physicians who must decipher and comply with multiple states’ licensure requirements, but also it creates substantial indirect costs for both physicians and patients by preventing some physicians from practicing in those locations where they would be most productive and where the need for providers is greatest.  For instance, specialist shortages in rural areas are endemic, and patients must often travel long distances and endure lengthy waits in order to be seen by a doctor.

During public health emergencies, such shortages, in conjunction with state licensure requirements, can have especially harmful consequences.  As of 2008, 18 states did not permit exemption from licensure or expedited licensure for volunteer physicians during disasters.  In these states, any out-of-state private practitioners who render voluntary aid must in effect practice medicine without a license, potentially placing themselves at risk for civil and/or criminal penalties.

The impact on telemedicine.  State licensure has had a marked effect on telemedicine in particular, effectively stifling its growth as an industry.  For decades, telemedicine has been touted as a potentially groundbreaking innovation which could benefit providers (lowering administrative costs, reducing barriers to relocating), patients (lowering the cost of care, increasing access, improving health outcomes), and payers (exerting downward price pressure on providers).  While the extent of these benefits is disputed, telemedicine has had success in several areas where it has been promoted.

A Better Path Forward

For years, various organizations have advanced proposals for relaxing the regulation of telemedicine and making it easier for physicians to practice across state borders.  For example, the Federation of State Medical Boards (FSMB) has endorsed and taken steps toward implementing a system of “expedited endorsement,” which offers qualifying doctors a simpler and more standardized licensure application process, but which still requires doctors to obtain a separate license for each state.

The Center for American Progress recommends that, short of the federal government implementing a single national licensing scheme, states should go further by adopting mutual recognition agreements in which they honor each other’s physician licenses (as they now do, for example, with driver’s licenses). Mutual recognition has already been adopted in Europe and Australia and has been successfully utilized by the Veterans Administration, the U.S. military, and the Public Health Service.  In addition, twenty-four states have signed on to a similar agreement for registered nurses and licensed practical/vocational nurses, called the Nurse Licensure Compact.

To spur action and help defray the costs associated with implementation, the federal government should encourage states to adopt mutual recognition agreements for physicians.  For instance, as noted above, the Center for Medicare and Medicaid Innovation (CMMI) could create an Innovation Model to pilot the use of telemedicine to provide access to underserved communities by offering funding to states that sign mutual recognition agreements. Because similarly complex and burdensome licensing systems also deter advanced practice registered nurses (APRNs) from providing needed health services across state lines, CMMI should consider including incentives in the innovation model for states that include APRNs in their mutual recognition agreements.

Proponents of the current system may object that adopting mutual recognition would compromise patient safety or reduce the revenues that states derive from licensure fees.  Yet because standards for physician treatment, training, and testing already apply nationwide, requiring physicians to obtain separate licenses for each state in which they practice confers little additional protection on patients.  Mutual recognition could actually be designed in such a way as to raise overall standards, for example by requiring that participating states conduct physician background checks.  Similarly, states could offset potential lost revenue by increasing fees for multi-state licenses.

The reality is that state medical licensure is a vestigial system that imposes significant costs on society without furnishing any kind of commensurate benefit.  We can and should do more to address this problem.


A Management Tool I Learned While Skiing

By brianascher

[Photo: The team members all seemed this happy]

The other weekend, while enjoying some rare snow this season in Utah, I had the chance to listen to Bob Wheaton, the President of Deer Valley Resort Company, give a talk about his management techniques.  Bob started his career at Deer Valley as a ski instructor in 1981 and worked his way up through a variety of positions.  He came across as a humble, straight-shooting leader, and many of the techniques he mentioned were what you would expect from a modern business leader.  He makes sure to hit the slopes daily to ask customers and employees how things are going.  He has weekly stand-up meetings with his senior executive direct reports to sync up on operational issues.  He sends regular broadcasts to all of Deer Valley Resort Co.'s roughly 2,800 employees, and he routinely holds open office hours.  One tool, however, struck me as relatively unique and powerful even though it is quite simple.  It is a weekly meeting Bob calls the Managers Meeting.

This meeting is for all of his direct reports' direct reports, about 60 managers in all.  Interestingly, Bob's own direct reports are not there, so the middle managers are free from having their own bosses in the room.  This serves to remove inhibitions about upsetting or upstaging one's supervisor.  The minutes of these meetings, however, are carefully transcribed and distributed to ALL company employees, so the senior leaders are not in the dark or suspicious about what occurred in the meeting.  The meeting is also large enough that it would be inappropriate and self-destructive to air personal grievances about one's boss.  It does, however, give middle managers a chance to be heard by the President in their own voice on a routine basis, and to hear directly from the top rather than always through the filter of their supervisor.  The fact that the meeting is held weekly means that issues get dealt with promptly, and the frequency keeps Bob in touch with operational details he otherwise might not be exposed to.  The weekly cadence means they get past the high level and into tangible and actionable topics.  It struck me as an elegantly balanced yin-yang leadership method that is both effective and efficient, and one that would probably work in many other industries.  I can say that the level of professionalism and smiling attitude of the Deer Valley team feels palpably different from most other resorts, and I suspect Bob's leadership, and this particular tool, play a big part in that.


An Emerging New Model for TV? Crunchyroll.

By David Pakman

As the viewer trend data make clear, legacy TV is undergoing a dramatic transformation, led by the many alternative ways of watching video. Cable subs are in decline, network TV viewership has tanked, and now even cable TV viewership is eroding. We frequently discuss the new streaming providers (YouTube, Netflix, Amazon, Hulu) and the on-demand show/movie retailers (iTunes, Amazon, Vudu), but a new model is emerging and worth discussing — the over-the-top (OTT) TV network. Our recently-exited investment in Crunchyroll provides a prime example.

Crunchyroll is the largest provider of Japanese anime online. They license scores of hit and long-tail anime shows from Japanese media companies for streaming throughout the world ex-Japan. They offer a free ad-supported viewing option and attract millions of unique monthly viewers. They also offer a paid, commercial-free option at seven dollars per month which makes a deeper selection of shows available. They are available on the web for PC streaming, and have an app on every major mobile and connected TV platform (iOS, Android, Roku, AppleTV, PS3, Xbox, etc.).

Crunchyroll has amassed hundreds of thousands of paying subscribers and is profitable with net margins many internet and legacy media companies would envy.

While they don't benefit from the incredibly rich we-will-pay-you-a-fee-even-if-no-one-watches-your-network affiliate fee model of legacy cable TV, they enjoy a more accountable dual advertising/consumer-subscription model. While most of us would consider this content niche, their total number of active viewers is considerably larger than that of most cable networks on your cable grid. Perhaps most impressively, like most technology companies, they are highly efficient, employing fewer than fifty people.

This model benefits from many of the advantages of the web. An embedding/link-sharing culture helps Crunchy, as everything viewable can be shared and discussed throughout the web. The product is highly mobile and feeds our preference for snackable media consumption on phones and tablets. Non-subscribers get easy access and a thorough chance to experience the content without paying. And the team is staffed by fantastic technologists who rapidly adopt and optimize the service for every new platform that emerges. The team has already started expanding their successful model to new content verticals.

Their success, I think, points the way for niche programmers to deliver great video services directly accountable to their viewers and advertisers alike, not polluted by the MVPD indirect affiliate fee model nor the antiquated Nielsen people meter/sweeps model.

For these reasons, Peter Chernin’s The Chernin Group is the new owner of this impressive company and team. I look forward to watching the continued success of Kun, Brandon, James, Brady and the whole team. Without much fanfare, they have pioneered a way forward for much of the video programming world. We are honored to have been investors since 2007 and to have watched you succeed.


Will the Internet Unbundle Higher Education Too?

By David Pakman

I was fortunate enough to be asked to deliver the keynote address at this year’s Sustainable Scholarship Conference, put on by ITHAKA. Here, I attempt to review how the internet has disrupted bundled industries and consider the question of whether it will unbundle higher education too.

ITHAKA is a not-for-profit  that helps the academic community use digital technologies to preserve the scholarly record and to advance research and teaching in sustainable ways. They run the popular JSTOR service, a growing digital library of more than 2,000 academic journals, nearly 20,000 books, and two million primary source objects provided to colleges, universities and scholarly communities. I serve as a trustee of ITHAKA.

My slides from the presentation are here:

Thank you to Kevin Guthrie, ITHAKA’s CEO, for the invitation to speak and for the overly-generous introduction!


Goldilocks and the 3 SaaS Go-To-Market Models

By brianascher

Software as a Service (SaaS) is having its moment.  Customers, entrepreneurs, and capital markets are all enamored with the SaaS model, and with good reason.  For customers, software as a service can yield dramatic reductions in total cost of ownership, quicker time to value, and pricing models that let you pay for only what you need, as you go, rather than all up-front.  For entrepreneurs, the recurring nature of subscription pricing gives more forward revenue and cash flow visibility, enables new customer acquisition models (such as Freemium), and the single code base for all customers is significantly easier to support than custom on-premise installs or multiple generations of packaged software releases (and the operating systems they run on).  Investors love the predictable revenue, high margins, and high growth rates.  This love affair with the SaaS model is likely to continue for a very long time.  The vast majority of business software is still custom and/or on-premise license based, so there is more than a decade of disruption and growth ahead.

When we dive one fathom deeper into the SaaS model, however, we quickly discover that there is not one single model but at least three very distinct Go-To-Market archetypes.  At one end of the spectrum are the high-volume, low-priced offerings such as Dropbox, Evernote, and Cloudflare that often deploy Freemium models, providing value to millions of individual users at no charge and converting some small percentage of them to premium paid accounts.  Workgroup collaboration and social/viral features are often built into these products to help turbo-charge organic growth and online acquisition, characterized by self-service signup and setup.  There are many entrepreneurs and investors who believe the whole point of SaaS is to get away from expensive direct selling in favor of these "self-service" models.  As an example, I was recently asked by an entrepreneur whether I was in the "pro-sales or anti-sales camp."  I am pretty sure they were referring to the need for salespeople, not sales themselves.  For the record, I like sales very much.

At the opposite end of the spectrum are sophisticated enterprise offerings such as Workday, Veeva, and Castlight Health that are used by large enterprises and can justify pricing of millions of dollars per year.  These solutions are usually sold by experienced field sales teams skilled in solution selling and in navigating long and complex sales cycles.  These products are feature rich in terms of end-user capabilities but also in terms of security, administration, and the ability to integrate with legacy systems.

In the middle are solutions that usually charge tens of thousands of dollars to low hundreds of thousands per year, are sold largely over the phone by an inside sales team, and are reasonably configurable.  Customers may be medium-sized companies or departments or business units of larger companies.  Examples of this model are Salesforce, Netsuite, Hubspot, and Smartling.

So which of these three models is best?  Is there one "just right" answer, as there was for Goldilocks?  Or do we take the Three Bears perspective that as long as you line up the size of the chair, the temperature of the porridge, and the firmness of the bed with the needs of your target market, all three models can be equally successful?  Clearly the latter, as one can point to several highly successful billion-dollar market cap SaaS providers deploying each of the three models.  The key is to line up product/market fit, sales and support, and price in a consistent and appropriate fashion.

It should be noted that it is possible to expand across models over time, as Salesforce.com has done by both selling over the phone to mid-market customers and deploying a field sales team to sell bigger deals to large enterprises.  Another example is Box.com, which can be used by individuals, small teams, and large enterprises, with pricing, feature sets, and support options appropriate to each tier.

But what happens when the product, Go-To-Market strategy, and price are misaligned?  Here are the most common mistakes we tend to see:

Market too small or product too narrow for Freemium: Free is a very compelling price, especially when trying to entice consumers to try something new, and this model can certainly lead to lots of users relatively quickly.  However, employing this model in too small a market, or with a product that lacks broad appeal, runs into the problem of there not being enough "top of funnel" free users from which some single-digit percentage (typically) will convert to paying users to grow a sustainable business.  In B2B markets, free can be a red herring, as there ought to be enough ROI (return on investment) enjoyed by customers using your product that they will happily pay at least some minimal monthly fee.  Business customers that don't see such value likely won't remain engaged over the long term as free users anyway.  Switching to a paid-only offering, perhaps with a brief free trial period or money-back guarantee, can be an accelerant to SaaS companies if they make the change early enough to avoid the messiness of taking away a free service from early adopters.  Some interesting case studies of SaaS offerings that saw their businesses grow rapidly when they dropped Freemium can be found here and here.  Even large SaaS companies in big horizontal markets, such as DocuSign and 37Signals, have greatly downplayed their free versions over time, in some cases removing them from the pricing pages of their websites, though customers can still find these free options if they search a little.

Underpowered and underpriced for large enterprise: We sometimes see impressive Fortune 500 logos on a customer list only to discover that the price points and deployments are quite modest.  These customers were acquired via heroic in-person selling efforts by the founders and below-market price points for non-strategic use cases.  The hope is usually that this will catalyze "land and expand" proliferation, but unfortunately oftentimes the product is not sophisticated enough to deploy enterprise-wide, or the sales team is incapable of selling at a price point that can ultimately sustain field sales efforts or the product roadmap necessary to serve large enterprise accounts.  While these "lighthouse" accounts are meant to serve as references upon which future inside sales efforts can draw credibility, the fundamental problem space can sometimes be too complex for effective phone sales to customers of any size.  Aria Systems is a SaaS subscription billing provider that serves large enterprises and has found that truly handling the needs of core business units within Fortune 500 customers requires a field sales team, sophisticated product feature sets, high-touch support, and price points that can sustain such service levels.  Aria has left the opposite end of the market, serving small developers with an inexpensive and simple online billing service, to competitors that are better tuned to the broad low end of the market and cannot compete with Aria for the narrower high end of the market.

Overbuilding for long tail markets: The opposite mistake is trying to serve long tail markets with a product too complex and expensive for widespread appeal, leaving oneself vulnerable to much simpler, cheaper, easier-to-use products.  This is particularly true when marketing to developers, where "cheap and cheerful" is more than adequate for most applications.  Stripe and Twilio have done a nice job of providing appropriately simple, developer-centric solutions at the low ends of their respective markets, payments and voice/messaging services, stealing this opportunity from incumbent providers who were too expensive, too complicated, and too hard to do business with.

Too many flavors all at once: While it is true that established vendors like Cornerstone OnDemand and Concur can serve the spectrum from small business up to global enterprise, young startups generally lack the resources to serve multiple audiences at once.  Those that allow themselves to be pulled thin in multiple directions find they serve no segment particularly well and have cost structures that are unsustainable.  Better to nail one of the three basic models and let the market pull you emphatically up or down market as a means of successful expansion.  When are you ready to broaden?

My advice is to wait until you are sufficiently up the Sales Learning Curve that you are sure you can recoup your paid sales and marketing expenses in an appropriately short timeframe (usually a year or less), given your particular customer churn rate, margin profile, and price points.  Once you are happy with your Customer Acquisition Cost (CAC) payback period, you can respond to market signals pulling you up or down market.  Likewise, I recommend making sure that your product is optimized for easy onboarding and support of the mid-market before adding sophisticated enterprise features to go upmarket, or your development team may be overwhelmed and your user experience compromised.  In general there seem to be more examples of moving up market than down market.  It is fundamentally easier to add features and salespeople to serve more sophisticated needs up market than to make a product simpler and master indirect channels to go down market.  When cooking porridge you can add salt, sugar, and spice, but it is much harder to take them away.
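
For readers who want the arithmetic, here is a minimal sketch of the CAC payback test described above. The formula is the common back-of-the-envelope version (acquisition cost divided by monthly gross-margin contribution, ignoring churn during the payback window), and every input number is an invented example rather than a benchmark.

```python
# Hypothetical CAC payback sketch. All inputs are invented example numbers;
# the formula is the common back-of-the-envelope version.
cac = 12_000                  # assumed fully loaded cost to acquire a customer ($)
monthly_subscription = 2_000  # assumed subscription price per month ($)
gross_margin = 0.75           # assumed gross margin on the subscription

payback_months = cac / (monthly_subscription * gross_margin)
print(f"CAC payback: {payback_months:.1f} months")   # 8.0 months with these inputs

# The rule of thumb in the post: roughly a year or less before stepping
# harder on paid sales and marketing.
print("within a year" if payback_months <= 12 else "longer than a year")
```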

It's a great time to build, buy, or invest in Software-as-a-Service.  Recognizing that there are multiple, distinct Go-To-Market models, each equally valid in the right circumstances, enables a clear-eyed and internally consistent strategy that avoids the mistakes described above and captures the high-level benefits of SaaS.

Note: Companies in italics are Venrock portfolio companies.


Why AdTech is Back (It Never Left)

By David Pakman

One of the most valuable characteristics of venture investing is that sectors go in and out of favor. Certain sectors, no matter the investment climate, have perennial long-term value. At least, that is my view. And I hold that view strongly about the AdTech sector.

More than 60% of the enterprise value created by internet companies comes from companies whose business model is primarily the selling of ads. Since the internet is both a communications medium and a transactions platform, I believe it will always create massive value through advertising. The internet, unlike most traditional media, is inherently a performance-oriented medium, and it delivers on the promise to make advertising and marketing more accountable and more efficient. Underlying the delivery of better ad performance, in a world filled with big, quantifiable data, is an ever-increasing slate of sophisticated technology operating on massive datasets in real time. If you advertise on the internet (and eventually, every brand and service in the world will), you need exposure to these technologies or you will underperform your competitors. The AdTech sector, fundamentally, is the delivery of these advanced advertising technologies to all advertisers.

In my previous posts on the evolution of online advertising, I painted a picture, like so many observers of the space, of a world where all impressions are traded on exchanges. That inevitable transition is happening at light speed now. More than 17% of display impressions on the web are traded on exchanges and the forecasts are bullish on this trend. In that world, online advertising looks much more like trading stocks than the buying of ads over lunch meetings. In 2008, we believed that significant value would be built in this exchange layer. It was this thesis that supported our Series B investment in AppNexus. That company continues its incredible run and is one of the largest ad exchanges in the world. We believe AppNexus will remain one of the most important companies on the internet.

While a huge amount of buying has moved to the exchanges, the level of sophistication of many advertisers taking advantage of these RTB platforms is still rudimentary. It turns out, just like outperforming the stock market year in and year out, it’s hard to do it well. The amount of data available to buyers is enormous. The number of parameters available in tuning and targeting your audience is almost limitless. And, most importantly, there are always better data scientists down the road doing a better job than you can at building proprietary targeting models. For all of these reasons, in our opinion, the second-most valuable layer in AdTech is the data-driven ad network layer. (Not to be confused with the inventory-driven ad networks.) Data-driven ad networks employ either large proprietary data sets or proprietary targeting models on top of very large data sets. The sophistication of the data scientists within these companies delivers a sustainable performance advantage over their less well-equipped peers. Two examples of these companies in our portfolio are Dstillery (formerly Media6Degrees) and Bizo. Rocket Fuel and Criteo are two additional companies in the space. Rocket Fuel’s recent IPO fetched it a market cap of more than $1.7B and Criteo is now over $2B. Many are asking themselves, “Why?”

[As an aside, Zach Coelius, the CEO of Triggit, points out that these companies should no longer be called ad networks because they no longer amass large amounts of inventory. They are instead more accurately "algorithmic media buying" companies or "data-driven targeting" companies...not really sure, but they aren't traditional ad networks.] 

The reason these companies are so valuable is that buyers on the exchanges are dominated by performance-oriented marketers today. Their dollars seek the best performance. The data-driven ad network layer is increasingly a case of the haves and have-nots. The better you perform relative to your peers, the more ad dollars you receive. These four companies significantly outperform their peers, and their incredible revenue growth (and enviable media margins) reflects this.

The reason these companies have bright long-term futures is that this layer is increasingly necessary, hard to replicate, and experiences tremendous network effects. In the early days of AdTech, some believed the traditional media buyers would be able to build their own technology stacks and deliver better performance and value than independent companies in the market. This has not turned out to be true. The best performance can be found elsewhere, largely within technology companies, and so that is where the dollars are flowing. This presents enormous long-term challenges for the incumbent media buyers and will continue to pressure them to flow more and more of their clients’ dollars to the better performing AdTech companies.

I believe this layer will eventually see tens of billions of dollars of media buying flowing through it. Of course, the exchange layer benefits from all of this too. For these reasons, there will be additional public AdTech companies which will fetch multi-billion dollar valuations coming to market. AdTech is back. Except it never left.


InstaCommerce vs AdStagram

By Richard Kerby

According to its site, Instagram is a “fast, beautiful and fun way to share your life with friends and family.” And that’s great! But Instagram just announced that it will be introducing ads into the user experience. Is advertising the best model Instagram could use to generate revenue?

Introducing AdStagram

Placing ads around content is a tried and true way to generate revenue while offering a free service. Google does it, Facebook does it, mobile apps do it, and even little blogs with a unique viewership of 20 people try to do it. So, why shouldn’t Instagram do it, too?

Instagram can attract plenty of brands to spend advertising dollars. There’s little doubt about that. But, if they take that route, they will run the risk of subjecting their users to potentially unwanted ads for the sake of revenues.

With that said, the medium of expression that Instagram provides lends itself well to brands looking to increase awareness and engagement.  Take H&M: they have over 1M followers, but a large number of those followers also follow hundreds of other accounts.  As a result, there is a fair chance that a brand’s followers will miss a good portion of its posts.

Employing a Twitter-like suggested follower or suggested list ad unit would give brands a mechanism to pay to acquire potential customers.  They could also take a page from Facebook’s playbook and allow brands to pay for “suggested” posts, where a brand could pay to insert a photo into your feed.  The latter strategy could be seen as much more disruptive to the user’s experience.  To prepare their users for such ad units, Instagram could begin to insert photos that your friends like into your feed to make the “suggested” post feature seem less intrusive.

It’s an easy route to go, as long as Instagram is generating the traffic to make it worth their while.  That’s not the problem, since they have well over 150 million users.

The problem is that ads could potentially degrade the experience of Instagram users.  Keep in mind that these visitors are used to an ad-free Instagram experience.  Now consider that Instagram provides users with the opportunity to display works of art to a global audience.  Ads could also distract users from the aesthetic experience they seek from Instagram.

Introducing InstaCommerce

There are other ways to generate revenue online, even while offering a free service.  eBay comes to mind here.  eBay is free to shop, but requires payment from sellers.  Like the advertising model, a commerce model could potentially degrade the Instagram user experience, if a user’s feed begins to be flooded with photos of folks just looking to sell a myriad of items.

This model does require a bit more creativity to make it work.  Luckily for Instagram, they have an advantage.  Real sellers are already using Instagram to showcase their work!  Up and coming fashion designers are using Instagram to reveal their work to the world.

Want to see for yourself?  Check out these fashion fellows worth following:

In order to generate revenue, Instagram would need to find the right balance between commerce and the artistry and communication that is the heart of their site.  Giving users who wish to sell items an option to add a “Purchase” button right next to the “Comment” button seems like a natural move.

Innovating Success

Instagram has already announced that ads are coming.  It sounds like Instagram will roll this out slowly and will give users the option to block ads that they do not wish to see.  That approach is possible, but I’m not convinced such a soft touch can generate meaningful revenue while still protecting the user experience.

So, maybe it’s time to take a step back.  Bridging art and commerce is a business tradition even older than hosting ads.  It’s a way for people to appreciate beautiful things, whether they can own it or not.

It’s also a way for those who can afford those beautiful things to keep the Instagram experience ad-free for the rest of us.  With a little innovation, Instagram could embrace commerce while keeping the artistry intact.  All it takes is a little creativity, and that’s what art and business have most in common.

I’m looking forward to seeing how Instagram’s monetization progresses; it has great potential to be a real revenue driver for Facebook.  What are your thoughts?


Hot Spots in Health IT

By bkocher2013

This article was first published in VentureBeat.

Health IT is hot…and it’s about time.  Investors and entrepreneurs alike are flocking to the sector that once seemed insurmountable, yet is desperately in need of imagination, creation, and disruption.

Digital health funding is up 12% in the first half of the year¹ and the momentum continues. With so many opportunities for entrepreneurs to disrupt the status quo, this growth is warranted, but it is not an easy space to navigate. Investors are working hard to understand the dynamics, drivers, parties and roadblocks so they can make informed investment decisions. Venrock has been investing in the space for over a decade, but understanding the combined dynamics of healthcare and technology is very new to most.

The mass influx of entrepreneurs to the space means there is a wealth of companies for venture capitalists to invest in, but not all of these companies are poised for success. At Venrock, we have identified three areas that we find particularly exciting, from an investor perspective:

  1. Improving market efficiency
  2. Improving labor productivity
  3. Substantially improving clinical outcomes or patient experience

We think there’s a real opportunity for new entrants to build great businesses tackling these challenges.  Common across each is the ability for startups to rapidly demonstrate ROI for customers and, in many cases, the ability to create a positive network effect. In addition, each capitalizes on the three factors driving change:  policy changes, demand growth, and cost pressures.

Translating Opportunities into Businesses

Improving market efficiency

Healthcare markets are highly inefficient, and a fundamental problem is the lack of useful data transparency.  Without knowing what care will cost and what outcomes and experiences are delivered, it is impossible for patients to act as rational consumers no matter how much cost sharing they are asked to manage. Consumers have been left in the dark.

Today, the Health Data Initiative and health IT startup pioneers have been breaking down the barriers to provide price transparency. This is crucial since:

  • Commercial prices vary for every service in every market by more than 300%
  • Quality and experience are rarely correlated to price

Therefore, for almost everything, it is possible to both get higher quality and pay lower prices.  Not surprisingly, when patients are empowered with this knowledge, they move their care to higher value providers.  As more and more commercial patients gain access to this data, we expect prices to fall and value to be correlated to price as it is in other, more competitive markets.

Data and data transparency are rocket fuel for a promising ecosystem of start-ups and high growth companies, including Castlight Health, iTriage, Kyruus, as well as New York City’s flagship healthcare start-up ZocDoc.

Improving labor productivity

Shockingly, unlike the rest of the US economy, healthcare labor productivity has declined by 60 basis points each year over the last 20 years while other sectors have large gains.² This is illustrated by the fact that non-clinical labor has grown by 50% over this same period. Today, for every doctor there are 16 other healthcare workers, and more than half of these are non-clinical workers.³ This equates to about $850,000 of labor cost per doctor.  Even more surprising is that the majority of these workers are administrators.  This is exactly the opposite of what you would hope for and expect.  In other industries, the proportion of administrative labor to productive labor compresses over time with productivity gains. This is how and why prices fall and products improve in other sectors.

We think how we use labor can be reimagined everywhere in healthcare. Companies like Vocera, Awarepoint, AirStrip, and Sotera Wireless are making hospitals more efficient, and others, like athenahealth, are making doctor offices more productive by taking the pain out of the reimbursement process.

Substantially improving clinical outcomes and patient experience

The current care system is not designed to achieve consistently successful clinical outcomes for patients when the desired outcomes are averting complications, preventing disease, and keeping people working, playing, and exercising as well as they did when they were young.

This is illustrated starkly by how we manage high blood pressure:

  • We diagnose only 50% of patients
  • Of those diagnosed, only 50% of them fill their prescriptions for medication
  • Of those patients, only 50% of them achieve blood pressure control
  • Of the remaining 50% who take their medications but are not controlled, more than 90% could be controlled if doctors adjusted the dosing and medications, but this rarely occurs
  • Overall, we control only about 15% of all high blood pressure patients.  And this is for a disease where we have low-cost drugs that work well (the short sketch below walks through the arithmetic)
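The funnel math is simple multiplication; here is a minimal sketch using only the rates cited above (the straight product comes to 12.5%, in line with the roughly 15% figure quoted):

    # Back-of-the-envelope hypertension control funnel, using the rates cited above.
    diagnosed = 0.50           # share of hypertensive patients who are diagnosed
    fill_prescription = 0.50   # share of diagnosed patients who fill their prescriptions
    achieve_control = 0.50     # share of treated patients whose blood pressure is controlled

    controlled_overall = diagnosed * fill_prescription * achieve_control
    print(f"Share of all hypertensive patients controlled: {controlled_overall:.1%}")  # 12.5%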

Opportunities to improve clinical outcomes exist for every other chronic condition too.  Over the next decade, more lives will be saved if we focus as much effort on redesigning care delivery and patient engagement as we do on investing in new treatments.  Start-ups like Proteus and RxAnte offer interesting new approaches to improve adherence and deliver better outcomes and cost savings.  We think new care delivery models are needed to fully deliver on the clinical results possible with existing treatments, through better processes, shared financial risk, and product designs that warranty and guarantee outcomes.

Fortunately, new primary care models like One Medical and ChenMed / JenCare are growing fast and delivering better care, far better patient experiences, and lower costs.  The recent acquisitions of CareMore and Healthcare Partners offer promising signs that these two models will gain greater scale to help more patients get much better care. 

Bottom Line

We think healthcare is getting better, sooner, in America. We see cost growth slowing, an influx of talented entrepreneurs, large incumbents demonstrating receptivity to partnering with growth companies, and large employers beginning to flex their market power.  The intersection of healthcare and technology presents a tremendous opportunity for entrepreneurs and investors alike.

But… this sector is not for the faint of heart. It is complex, evolving and increasingly crowded. Fortunately for us, entrepreneurs love a challenge.

(Disclosure: Awarepoint, Castlight Health, Kyruus and Vocera are Venrock portfolio companies.)


¹ RockHealth Digital Health Mid-Year Funding Report

² Kocher, R and N. Sahni. 2011.  Rethinking Healthcare Labor.  NEJM.  365:1370-1372

³ Kocher, R.  2013.  The Downside of Health Care Job Growth.  HBR Blog.  http://blogs.hbr.org/2013/09/the-downside-of-health-care-job-growth/


Our Latest Investment – Burner!

By Richard Kerby

I’m excited to announce Venrock’s investment in Burner!  Marissa Campise and I will be working closely with Greg Cohn and the Burner team along with David Frankel at Founder Collective.  

There are 300M mobile phones in the US and a couple hundred million landlines, a number that is dwindling. Roughly 34% of Americans have done away with landlines and only have a mobile number. Mobile is now the single number for your identity.

Burner is a privacy and identity layer for your mobile device that provides consumers with disposable numbers. It’s super easy to use—you download the app and can instantly get a new number on the fly, which can be used for texting or calling, until you want to burn or destroy the number and put it out of circulation. This idea of disposable electronic identities is not new. We’ve been doing it with email for years. But we currently have no way to do this with phone numbers because the carriers control them. Burner puts that control into the hands of consumers.

Where it gets really exciting is thinking about this as a mobile identity and communications platform. We are hopeful that in the same way Hotmail and Yahoo Mail pulled the internet identity layer away from ISPs and let consumers create their own email addresses, Burner will be able to separate the last piece of the telecom stack from the carriers. By enabling true number portability and moving the voice and SMS stacks out of the carriers’ hands, it completes their transition into a dumb pipe, gives consumers more flexibility, and supports a longer-term trend of separating identity from a single mobile number.

So if you haven’t already downloaded the app, go ahead and do so and let me know what you think!  For the official announcement from the company head here.


Square Pegs, Round Holes, and Innovation

By Matthew Nordan

I spoke a couple of weeks ago at the NextWave Greentech Investing conference (organized by my friend Rob Day at Black Coral Capital). My aim was to describe the past, present, and future of investment in energy and environmental innovation in a way that would not epically bore the crowd… So I brought props.

While I failed in my stretch goal to break something onstage, I hope I articulated the square peg/round hole challenge in this field. Here’s the video:

Feedback is welcome. I’ve also put up a version of the slides on the tools page.


The Downside of Health Care Job Growth

By bkocher2013

This article was first published in the Harvard Business Review.

While the growth of health care costs has slowed over the past few years, lowering costs over the long term will depend on improving health care labor productivity. Over half of the $2.6 trillion spent on health care in the United States in 2010 was wages for health care workers, and labor productivity has historically worsened at a rate of 0.6% per year. Simultaneously, the individual mandate, subsidized coverage, and Medicaid expansion in the Affordable Care Act (ACA), along with an aging population, will drive up the demand for health care. Reducing the rate at which health care costs grow, and the proportion of U. S. gross domestic product and public sector budgets that are consumed by health care over the long term, therefore, will require either increasing labor productivity or substantially lowering workforce salaries. The early signs are worrisome. With health care viewed as a jobs source and jobs being added faster than demand is growing, we appear to be on a path toward more workers and lower salaries, not necessarily more productivity, unless something changes dramatically.

Using data from the Bureau of Labor Statistics (BLS) and the American Medical Association, my colleagues and I found that from 1990 to 2012, the number of workers in the U.S. health system grew by nearly 75%. Nearly 95% of this growth was in non-doctor workers, and the ratio of doctors to non-doctor workers shifted from 1:14 to 1:16. On the basis of BLS median wages, this equates to $823,000 of labor cost per doctor. Demand and supply are not growing in tandem: from 2002 to 2012, inpatient days per capita decreased by 12% while the workforce in hospitals grew by 11%. This misalignment underlies some of the productivity decline we have observed in health care. Fortunately, we anticipate demand for health care to grow in 2014, so to the extent that jobs are not added, productivity gains are possible. Unfortunately, health care as an industry continues hiring far faster than demand is growing, adding 119,000 new workers in the first half of 2013, for example, with little increase in patient volume.

So, what are all these people doing? Today, for every doctor, only 6 of the 16 non-doctor workers have clinical roles, including registered nurses, allied health professionals, aides, care coordinators, and medical assistants. Surprisingly, 10 of the 16 non-doctor workers are purely administrative and management staff, receptionists and information clerks, and office clerks. The problem with all of the non-doctor labor is that most of it is not primarily associated with delivering better patient outcomes or lowering costs. Despite all this additional labor, the most meaningful difference in quality over the past 10 years is the recent reduction in 30-day hospital readmissions from an average of 19% to 17.8%, which arguably was driven by penalties imposed by the ACA and not by organic improvements in care models. While one could interpret the expansion of non-doctor clinical labor as a source of leverage for doctors, the number of patients doctors are seeing and whose care they are managing hasn’t increased.

This trend is troubling as we enter a phase of transformation in health care. Today, more than 60% of labor is nonclinical and is fragmented across various provider organizations, payer systems, and delivery models. It is highly unlikely that we can reorganize these jobs in a way that meaningfully improves productivity. This difficulty is compounded by regulations that limit the corporate practice of medicine, Stark laws, state nurse and physician assistant scope-of-practice and licensure rules, and billing requirements that physicians physically see patients to receive full reimbursement. Reducing regulatory hurdles represents a substantial opportunity to improve productivity by reducing fragmentation of clinical labor and delegating care to lower-cost qualified providers, but the most immediate goal should be to eliminate many nonclinical jobs through standardizing and simplifying revenue-cycle processes, credentialing, supply chains, regulatory compliance, and information technology systems, which will then allow us to reengineer administrative systems.

On the clinical side, care delivery must be designed so that the 6 clinical workers per doctor substantially contribute to a patient’s care. Today, too much clinical labor is diverted from direct patient care to lower-than-license roles such as payer utilization-management roles, staffing of underutilized diagnostic centers, administrative roles, and uncoordinated care activities. In practice, little of the existing clinical labor is actually organized into patient-care teams, and few have clarity about what outcomes they are specifically working to achieve and who is responsible. Reorganizing clinical labor around direct patient care and creating unambiguous accountability for clinical outcomes together have the potential to substantially alleviate the predicted shortage of clinicians as coverage is expanded and to improve system-level productivity, outcomes, and patient experience.

Lessons can be learned from sectors such as manufacturing. Through a significant revolution, manufacturing was able to transition from direct labor to a more productive, efficient industry, and this happened over more than a century, from 1855 to 1975. In addition, both production and administrative labor decreased as processes were redesigned to become more reliable, error-free, and efficient. In health care, although the optimal relationship among doctors, other clinical staff, and administrative labor is uncertain, it is certainly the case that there should not be more administrators than doctors and all other clinical labor combined. Rather, one would expect the ratio of nonproductive to productive labor to decline over time in health care as it has in all other productive sectors of the economy. We can also surmise that the improvement needed will take decades and must be sustained by economic incentives that are aligned with productivity far more strongly than they are today.

To reverse the decline in health care labor productivity, we must transform the system on both the supply and the demand side. As Ari Hoffman and Ezekiel Emanuel argue in the Journal of the American Medical Association, reengineering is very different from implementing new technologies. On the supply side, innovative reimbursement models aim to reward providers for lowering health care costs. Consider, for example, the sorts of models being tested in Arkansas (where health care providers are given a fixed budget and a set of quality measures to achieve for an entire course of care from diagnosis to recovery) and Pioneer accountable care organizations (where providers are paid a lump sum and given a set of quality goals for a year of care for a patient). With these payment models, providers make more money when they invent more cost-effective approaches to delivering high-quality care. Simultaneously, on the demand side, more transparency in price and quality data can direct patients to more productive settings, intensifying the incentive for providers to improve.

In the interim, workers in the health system will need to worry about their wages as more jobs are added — unless care and costs are substantially reengineered in the systems in which they work. Health care practitioners should take pride in delivering consistent and excellent clinical outcomes with fewer labor hours and lower total costs, just as leaders have in other industries. Moreover, health care leaders should also focus on replicating other sectors of the economy when it comes to reducing nonproductive labor. Finally, health care leaders and practitioners should seek to remove labor that is not directly contributing to better outcomes or delivering a hard return on investment through reductions in the cost of care. It is conceivable that shared services can emerge for processes such as credentialing, compliance, and data management and that, along with ACA-mandated revenue-cycle simplification they can substantially reduce administrative labor. In a health system where costs, out of fiscal necessity, grow more slowly, it is far more desirable to reduce nonproductive administrative labor than to reduce clinician wages.


How the Hyperloop Could Change the World

By Richard Kerby

People are bound to be skeptical when you put the word “hype” at the front of your name. However, the reason you should care about the Hyperloop is Elon Musk’s track record of making impossible things happen. Keep in mind that this guy has been in business since he developed the video game Blastar, at age 12. He turned email into a financial instrument, with PayPal; his SpaceX built the only commercial rockets that could deliver a payload to the International Space Station; and with Tesla Motors, Musk has even managed to make an electric car that is miles from dorky.

4 Things You Need To Know About the Hyperloop

So, yeah. Musk is serious. Here is a very brief breakdown of his Hyperloop and why the world needs one.

  • The Hyperloop is essentially a futuristic monorail comprised of elevated tubes running from Los Angeles to San Francisco (although it wouldn’t go directly into the downtown areas of either city), with spokes to nearby population centers such as San Diego and Las Vegas.
  • Pods containing as many as 28 people would travel at up to 800 miles per hour, more than twice as fast as some planes, and could cost only $20 per person.
  • The system runs on an electric induction motor of the kind proposed by Nikola Tesla and can be powered by solar energy with zero emissions.
  • It would cost one tenth as much as the current high-speed rail proposal and revolutionize travel between major cities.

Life in Tube Land

Musk explained that there are limits to this new form of transportation, “It makes sense for things like L.A. to San Francisco, New York to D.C., New York to Boston and that sort of thing. Over 1,000 miles, the tube cost starts to become prohibitive, and you don’t want tubes every which way. You don’t want to live in Tube Land.”

One of the most obvious benefits of the Hyperloop is that it vastly expands recruiting territories. Companies would have a greater pool of candidates to choose from, as employees could commute from hundreds of miles away from the office. On deeper reflection, what it really means is a merging of America’s biggest metropolitan regions.

It could become a new economic engine by creating a practical way for talent to reach vision, for investors to discover inventors and for families to heal the distances between them. Just as the development of commercial airlines helped unleash the economic boom of the 1950s, the economic possibilities of this technology could be truly staggering.

Is the Hyperloop a Sure Thing?

Let’s be clear, though. The Hyperloop is purely hypothetical at this point, and Musk readily declared that he won’t be available to work out the details. It would cost more than $6 billion under the best of circumstances, although I do think Jonah Peretti’s idea  of an LA to Vegas route funded by the casinos is a great one. The first working model is at least seven years away and in that time, there will be plenty of hurdles to clear even if California approves it – and that’s a long shot. On the other hand, it is not nearly as long as a rocket flight to geostationary orbit, so I wouldn’t bet against him on this one.  


Energy, Human Evolution, and Neuroscience

By Matthew Nordan

There’s a dividing line down the center of the energy and environmental community. On one side are Amory Lovins types who say that we should focus exclusively on deploying the technologies we already have, because no breakthroughs are needed to scale the world for 10 billion people. On the other side are the likes of Bill Gates, who describes today’s solar and wind as “cute” and says we need radical innovation to keep the world turning.

While I don’t think either extreme is helpful, I’m on the Gates side of the continuum.

My point of view is informed by my work as an investor, but just as much by my initial academic training in human behavior. Human beings are wired not to achieve some specific standard of living, but to always have more – “more than those around me, more than I had yesterday” – and evidence from sociology to brain imaging speaks to the fact. I gave the rapid-fire version of this thinking in the talk below from last month’s VERGE Boston event:

Take a gander and let me know what you think. At some point I’ll get around to writing up the long-form version of this thesis, which has sat half-composed on my laptop for several years now.


Why You Can’t Short Private Company Stock

By Richard Kerby

Stock shorting is popular among public-market investors, but it is not possible for every type of company stock. Only public company stock can be shorted; private company stock cannot be shorted, regardless of the number of willing buyers. To understand why, it helps to first define what shorting entails.

What shorting entails

To generate money from shorting, you have to be good at market speculation and analysis. As an investor, you concentrate on stocks you think are currently overpriced and likely to fall later. Selling such stock at today’s price and buying it back later at a lower price earns you a profit, especially if the gap between the two prices is large.

You need to have the stock in order to sell it. Unfortunately, you may not own the particular ‘overpriced’ shares. The way to acquire them is to borrow them from a shareholder, either by contacting the holder directly or by going through your broker. Brokers typically lend out shares held in their customers’ margin accounts, and the borrowed shares must eventually be returned to the lender.

As the short seller, you sell the borrowed ‘overpriced’ shares and wait for the price to fall. When the value drops, you buy back the same number of shares and return them to the lender, pocketing the difference. Although this can be an attractive trade, it is considerably risky, since the stock may appreciate instead of depreciating. Let’s now take a look at a numeric example.

Let’s say I own 10 shares of company ABC at $10 per share. You believe the stock price of ABC is overvalued and is going to crash sometime soon. You then come to me, and ask to borrow my ten shares of ABC and sell them at the current market price for $10. I agree to lend you my shares as long as you pay me back ten shares of ABC at some point in the future. You take the ten borrowed shares, sell them for $100 and pocket the money (10 shares x $10 per share = $100).

The following week, the price of ABC stock falls to $5 per share. You call your broker and tell him to buy 10 shares of ABC stock, at the new price of $5 per share. You pay him the $50 (10 shares x $5 per share = $50). You then return the shares of ABC that you borrowed from me.

In summary, you borrowed my shares of ABC and sold them for $100. When ABC fell to $5 per share, you repurchased those ten shares for $50 and gave them back to me. This resulted in a $50 profit for you (minus, of course, any trading fees).

What would have happened if you were wrong and the stock price had increased?  You would have had to buy back the shares at the new, higher price, and absorb the loss. Unlike regular investing where your losses are limited to the amount of capital you invest (e.g., if you invest $100, you cannot lose more than the $100), shorting stock has no limit to the amount you might ultimately lose. In the unlikely event the stock had shot up to $500, you would have had to purchase ten shares at $500 a share for $5,000. Taking into account the $100 you received from selling the shares earlier, you would have lost $4,900 on a $100 investment.
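The arithmetic in both scenarios reduces to shares times the difference between the sale price and the buyback price; here is a minimal sketch of that calculation, reusing the hypothetical ABC numbers from above:

    def short_pnl(shares: int, sell_price: float, buyback_price: float) -> float:
        """Profit (negative for a loss) on a short sale: sell borrowed shares, buy them back later."""
        return shares * (sell_price - buyback_price)

    # The ABC example above: borrow 10 shares, sell at $10, buy back at $5.
    print(short_pnl(10, 10.0, 5.0))    # 50.0  -> the $50 profit (before trading fees)

    # The downside scenario: the stock runs to $500 before you cover.
    print(short_pnl(10, 10.0, 500.0))  # -4900.0 -> the $4,900 loss on what began as a $100 position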

Public vs Private Company Shorting

Public company shorting is possible primarily because public stock can be freely traded: shares can easily be sold to willing buyers and re-acquired from willing sellers. Private company stock, however, is subject to transfer restrictions and cannot be freely bought or sold, so it cannot be borrowed, sold, and repurchased the way a short sale requires.


Five Lessons Learned From Bringing Cleantech to China

By Matthew Nordan

tl;dr: Experienced cleantech CEOs leverage China instead of fearing it – enlisting self-interested partners to defend IP and manage risk.

This post was co-written with George Miller, an MIT MBA student who conducted this research while interning with me. A version of it also appeared at GigaOM.

One morning it dawned on me that of the nine energy companies in our Venrock portfolio, a third are focused on China – yet none of them planned it upfront. I figured it would be good to understand how other cleantech start-ups have approached the Middle Kingdom, so I enlisted MIT MBA student (and fluent Mandarin speaker) George Miller to interview a representative sample and collect best practices.

George spoke confidentially with 15 venture-backed cleantech start-ups that have set up Chinese operations. Every interviewee was either a C-level executive or VP of business/corporate development; the majority were CEOs. Nine of the 15 interviewees entered China primarily to sell into the domestic market, while the balance aimed to export from Chinese manufacturing facilities. The average company is 11 years old and entered China five years ago.

China entry timeline, 15 start-ups interviewed

Most of our research findings are confidential to Venrock and the companies interviewed, but some high-level conclusions deserve a broader airing.

China strategies have been mostly improvised. At all of the companies we spoke with, China is a big, board-level deal, ranking somewhere between “an important growth market” and “our sole focus.” Yet only four firms had a specific China plan at the formation stage, and three initially didn’t plan to enter China at all.

“In the company’s initial business plan, China entry was…”

Nearly all companies partner. The most common engagement model we found was a joint venture (JV) with a Chinese enterprise, represented by eight of the 15 interviewees. Six have set up distribution agreements but not full-blown JVs. Only one has gone it alone in China, with a standalone, wholly foreign-owned entity that manufactures and sells directly.

China entry approaches

IP is the big challenge. When we asked about key challenges experienced in China, intellectual property (IP) protection topped the list. This is no surprise – tales of IP leakage in the country are legion, with cleantech’s most glaring example being the outright theft of American Superconductor’s wind turbine software. General transparency in business dealings came second, and a cluster of people-related challenges followed. In contrast, interviewees didn’t find market access difficult: We heard that with strong government support and large pools of capital, the risk appetite for capital-intensive projects is greater than in most developing countries.

Key challenges cited in China entry

JVs are the solution. Conventional wisdom says to protect IP by building moats – like splitting manufacturing steps across sites so no one person knows them all, or supplying a key “black box” component from outside China. Our interviewees employ these moat-building tactics, but they think bridge-building works best: The technique rated most effective was forming a joint venture with a large Chinese partner, incentivizing that partner with outsized ownership, and relying on its self-interest to defend the IP. Notably, every interviewee with a JV ranked this tactic the highest.

“What IP protection method was most effective in China?”

JVs address secondary problems, too. When we delved into interviewees’ secondary challenges about transparency and people, it turned out that a strong JV partner was effective in resolving them as well. The stories we heard addressed…

…conflicts of interest: “We had no idea that the largest shareholders in [a potential distributor] are also in the seats of power at [the end customer]. Only once you reach the goal line do they open the kimono.”

…internal corruption: “One of the executives we hired was marking up purchase orders and taking kickbacks. He didn’t have a bad heart, and that practice is common in China – so instead of ‘firing’ him, we ‘retired’ him.”

…training: “Because our technology is so unique, we didn’t have trouble attracting and retaining talent. The challenge was educating them on exactly what we do.”

…workforce management: “It’s not just the government that’s socialist; it’s also the labor force. Financial incentives don’t work well. We used vacation time as a key motivator.”

It’s self-evident that these challenges can be mitigated by a strong in-country partner that knows the value chain and manages lots of people.

. . . . .

Chinese joint ventures are no walk in the park. The average JV in our sample was three and a half years old, had taken longer to get going than expected, and was considered too early to call as a success or failure. Interviewees complained about long government approval processes and culture clashes along the way, and we didn’t hear any silver-bullet tactics for doing it right: The best practices were all things you’d expect, including intensive background checks of partner executives, JV agreements that maintain “face” for both start-up and partner (usually relying on profit-sharing), and experienced domestic legal representation. And clearly, you’ve got to be obsessive about picking a trustworthy partner – American Superconductor’s widely-publicized IP dispute is, in fact, with its former JV partner Sinovel.

Despite all those caveats, I drew a clear conclusion from this work. Most cleantech innovation is happening in the U.S., but most adoption will be in growth economies building new infrastructure – with China at the top of the list. Chinese incumbents like Wanxiang, Shenhua, and ENN are scouring the west for technologies to pick up. In this environment, a cleantech start-up can either play defense at the barrel of a financial gun (see A123 Systems), or play offense, entering China on its own terms and timeline. If you’re going to do the latter, be prepared to partner up.


Same Day Delivery

By Richard Kerby

Consumers have been minimizing time and cost by shopping online for a while now.  Amazon and others have been offering an online shopping option for years. However, more people are looking into ordering perishable items such as groceries or flowers, and they’re requesting same-day delivery. In this post I will explore four services that I’ve tried – Google Shopping Express, Amazon Prime, Instacart and Fresh Direct – to see how each works and what the pros and cons are for consumers looking to shop through them.

Amazon Prime

Amazon Prime customers are able to enjoy streaming video and movies from Amazon’s website, along with other perks such as discounted express delivery. Recently, Amazon Prime began offering discounts on same-day delivery, much to customers’ satisfaction. The good thing about Amazon is that it doesn’t just offer same-day delivery on perishable items; it also offers same-day delivery on any item that has the Prime logo near it. Most of these items are shipped and sold by Amazon rather than third-party companies, although there are a few exceptions. People can enjoy same-day delivery on items such as furniture, food and gifts for as little as $3.99. Amazon’s same-day delivery is called Local Express Delivery.

Fresh Direct 

Fresh Direct is a well-known online grocer that delivers everything from eggs to toilet paper directly to your home or place of business. At the moment, they primarily deliver to NYC and parts of Connecticut and Pennsylvania. Since they have a 100% satisfaction guarantee, customers feel safe purchasing products from the site even if they can’t view them in a store beforehand. A lot of their popularity comes from the fact that their prices are similar to those you would find in a grocery store, and sometimes less. Items are all shipped in refrigerated trucks seven days a week between the hours of 6:30 AM and 11 PM, making it very convenient for those who need their groceries at any time of day. The great thing about Fresh Direct is that they also have an app that allows consumers to shop directly from their iPhone, iPad or Android-powered device. This was a go-to service for me when I lived in NYC.

Instacart

Instacart is well known across San Francisco, Palo Alto and the surrounding neighborhoods. Unfortunately they don’t deliver outside of California just yet. Instacart allows consumers to purchase items from stores such as Safeway, Costco, Whole Foods and Trader Joe’s from the convenience of their home computer, tablet or smartphone. They carry over 30,000 grocery items from these stores, all available for same-day delivery in as little as one hour. The way it works is that the consumer purchases items from the online catalog, the order gets routed to a personal shopper for collection, and then it’s delivered directly from the grocery store to your home. Depending on where your home or office is located, these items could reach you in as little as 60 minutes. Delivery prices vary, but most customers choose to have their groceries delivered in under two hours (the majority of these orders are around $35), making the delivery fee just around $3.99. They deliver on holidays and weekends, and their normal delivery hours are between 10 AM and 9 PM, seven days a week.

Instacart has been my favorite service in SF.

Google Shopping Express 

Google Shopping Express, not to be confused with Google Checkout a.k.a. Google Wallet, is a delivery service that allows consumers to purchase from big retailers such as Walgreens, Office Depot, Staples and Target for quick same-day delivery.  At the moment, Google Shopping Express only delivers in the San Francisco Bay Area, but they are looking to expand into other cities and states. The Google model is identical to Instacart’s; however, the big downside here is the delivery windows.  While Instacart offers one-hour delivery windows, Google uses three-hour delivery windows, which is pretty burdensome for those of us who can’t block off three hours of our day to stay in one location.  I am sure Google will improve on their delivery windows, since the service has not yet publicly launched.

There is a plethora of other companies that can provide a similar service – TaskRabbit, Exec, etc.  I love products and services that make me more efficient.

What are everyone else’s thoughts?


Self Driving Cars!

By Richard Kerby

There is a new type of connected technology that’s raising eyebrows all around the world – self-driving cars. OK, OK, so everyone is actually talking about Google Glass, but I have a greater fascination with the way the self-driving car can change the lives of all of us.

So, What’s the Big Whoop?

Some consumers and companies are worried about the programming going rogue and putting passengers in danger, and others wonder if this will be a solution to the global warming crisis that seems to be worsening each year. Below are some reasons why I think this innovation will help in more ways than one.

Minimizing loss of life. Every day dozens of drivers are killed in automobile accidents. Driverless cars are able to detect potential hazards and work as the eyes and ears for the driver. According to Google, their driverless cars will reduce traffic accidents by 90 percent. In addition to saving lives, driverless cars will also cut the annual cost of traffic accidents, which is currently over $450 billion.

Saving money on commuting. Not only do driverless cars know the most efficient routes to your destination, but they’re also estimated to use less gas. Google claims that driverless cars can save over $101 billion in fuel costs. The majority of this $101 billion is spent on wasted gas from taking incorrect or less-efficient routes, so that equates to savings in your pocket and savings for the environment.

Disrupting the auto industry. Businesses are already lining up to invest in fleets of these driverless cars. Companies such as Zipcar could see a fundamental change in the structure of their fleets once driverless cars hit the road as livery/taxi services and are produced in mass numbers. The reason investors are scrambling to invest in driverless car services is the profit margin, and the best part is that they won’t have to hire a staff of drivers. The car does all the work itself, so there’s no need to pay high health insurance premiums or employee salaries.

All in all, I think Google has got something special here (and we thought Google Glass was cool). Are you ready to ghost ride the whip? When driverless cars hit the mainstream, will you buy one?


Venturescape: This Year’s NVCA Annual Meeting

By David Pakman

I’m going to Venturescape, the NVCA Annual Meeting May 14 and 15 in San Francisco and you should too. I haven’t been to one in many years, but this year is different.

The NVCA is the National Venture Capital Association. It’s much more than just a trade organization, and this year’s annual meeting demonstrates that.

My friend Jason Mendelson from Foundry Group is on the NVCA Board of Directors and he is running Venturescape. That being said, if the meeting was going to suck I wouldn’t go. But it looks quite good.

I’m excited about the agenda, as this is the best lineup I’ve seen at one of these events. Included in the mix are:

  • Dick Costolo, Twitter CEO
  • General Colin Powell
  • Ginni Rometty, IBM CEO
  • Anne Wojcicki, 23andMe CEO

There is also the world’s largest VC Office Hours. And for the first time, “fun” is part of the meeting in the form of NVCA Live! — a great concert featuring Pat Monahan from Train and Legitimate Front, a band in which I’m starring as the drummer and main groove man.

If you are a VC, I hope to see you there. If you are an entrepreneur, ask your VC funders for tickets to NVCA Live!, as that is open to everyone, although tickets are only purchasable by NVCA members.

If you are coming, especially to NVCA Live!, let me know. See you there.


The State of Cleantech Venture Capital: What Lies Ahead

By Matthew Nordan

(A version of this post also appeared at GigaOM.)

tl;dr: Cleantech VC is receding because of poor short-term performance – no surprise in a post-bubble field with outsized time and money requirements. The category is about to go on a walk in the woods, where innovators will blaze a new trail.

In late 2011 I decided to write up an internal analysis I’d done at Venrock about the state of cleantech venture capital and make it available broadly. I’m a fact-based, research-driven guy, so I tried to shine the light of data on myths and realities in the field. My macro conclusion was that while it was really early, investment returns to date were on par with VC overall.

Much has changed since then. With 2012 numbers done and dusted, I figure it’s time to revisit this topic – again, under the light of data. I’ll frame this analysis with the questions I’ve gotten from VCs and entrepreneurs who’ve asked me for an update.

What’s happening to cleantech venture capital?

It’s receding.

Cleantech VC funding by quarter, 2011-2012 (US$ millions)

  • Investment fell 30% in 2012 – and even further at the early stage. The Moneytree survey numbers had cleantech VC investment falling from $4.6 billion in 2011 to $3.3 billion in 2012 – a 28% drop. Further, they showed first-time funding of new start-ups plummeting 58% to just $216 million, and shrinking as the year progressed: By Q4, first-time funding was just 4% of capital invested.
  • Limited partners are backing off. VC firms get the money they invest from limited partners (LPs) like foundations and pension funds. Last December Preqin called up 31 LPs that were invested in at least one cleantech-focused fund and asked if they planned to back any new ones in 2013. Only 22% said yes (down from 31% a year before).
  • The people are changing. Many VC firms parted ways with their cleantech teams in 2012. While February’s ARPA-E conference had a record number of attendees, venture investors were scarce – replaced by a bumper crop of corporate types.

Why is this happening?

Cleantech VC performance is substantially lagging venture capital as a whole. This wasn’t true in 2011, but things changed fast in 2012.

I arrive at this conclusion by comparing two data sets. On one hand, we have data on the interim performance of 19 cleantech-only VC funds as reported by the California Public Employees’ Retirement System (CalPERS), a big LP. On the other, we have equivalent data for the entire universe of VC funds from the National Venture Capital Association. (The data are expressed as “value to paid-in capital, net to LPs,” which means “the current value of the funds divided by the money put into them, accounting for what VCs pay themselves.”) By comparing cleantech-only fund performance with the full VC universe at the same points in time, we can see whether cleantech is doing better or worse than the asset class.
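To make the metric concrete, here is a minimal sketch of the calculation being compared; the fund numbers below are made up for illustration and are not the CalPERS or NVCA figures:

    # "Value to paid-in capital, net to LPs": current fund value divided by capital paid in,
    # after fees and carry.  Illustrative numbers only.
    def value_to_paid_in(current_value_net: float, paid_in_capital: float) -> float:
        return current_value_net / paid_in_capital

    # A hypothetical fund that has called $100M from LPs and is now marked at $90M net to LPs:
    print(f"{value_to_paid_in(90.0, 100.0):.2f}x")  # 0.90x -- comparable to the cleantech figure cited for September 2010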

The answer is that cleantech went sideways in 2012 while VC overall did well. In September 2010, the cleantech VC funds were worth 0.90x the money paid into them while comparable VC funds overall were at 0.96x – roughly the same. Six months later the gap had widened, but both had risen in value and remained within spitting distance. By June of 2012, however (the most recent data available), the cleantech funds had declined slightly while the overall VC universe climbed to 1.23x.

Cleantech-only VC fund valuations vs. full VC asset class

This is why investment is stalling, LPs are hesitating, and cleantech VCs are thinning: Capital invested in other domains is showing a greater near-term return.

If minimal money had gone into cleantech, or if the macro environment were rosier, there might be more willingness to forge ahead. But today, fund managers assess the $25 billion worth of cleantech VC invested since 2003 against a backdrop of shale gas and climate apathy – and tighten the purse strings.

OK, but why is that happening? What’s driving weak cleantech VC performance?

Two factors. First, there have been too few exits.

Let’s consider the gold standard of VC wins – an IPO on a major exchange. When I last did this analysis, cleantech was overperforming on the IPO front: In 2009, 2010, and 2011, cleantech’s share of VC-backed IPOs exceeded its share of VC funding. (Note: One must apply an appropriate time lag to the latter – I used five years, which is informed by deal-by-deal fundraising data by cleantech start-ups).

This ended in 2012. Just as in the prior year, three cleantech IPOs took place out of about 50 VC-backed IPOs in total (6%). But cleantech’s corresponding share of VC funding rose to 10% – so cleantech was now underperforming on exits relative to capital invested, instead of overperforming.

Cleantech share of VC funding vs. share of VC-backed IPOs, 2004 to 2012

(Of course, most VC-backed companies exit through acquisition, not an IPO. But the M&A front looks no better for cleantech. When merchant bank Jane Capital counted up every acquisition of a VC-backed cleantech start-up worth more than $50 million in the last 10 years, it found just 27 of them.)

Second, the winners have disappointed post-IPO. When a start-up goes public, its VC investors rarely get to sell their shares immediately: They have to wait out a lockup period that typically lasts six months. Of the nine VC-backed cleantech start-ups that have done major-market IPOs since 2010 and have been public for more than six months, eight were trading below their IPO price at the 180-day mark.

Aftermarket performance of cleantech IPOs completed 2010-2012

In four of those cases, the 180-day share price was also lower than the price at the last venture round. That means VCs who bought shares in that round were under water when the lockup expired.

So is the pullback in cleantech VC justified?

Well, it’s certainly expected. The cleantech gold rush of the late 2000s saw hundreds of start-ups funded – many with identical propositions – that greatly exceeded the carrying capacity of their industries: For example, there’s no way that more than a handful of the 219 solar start-ups counted by Greentech Media in 2009 could possibly succeed. This dynamic isn’t unique to cleantech. The Internet VC bubble of the late 90s was the same story, albeit on a much larger scale.

But just as the boom-and-bust in dot com investment didn’t mean this whole Internet thing was a waste, the same is true for energy and environmental technologies. It’s very likely that multiple billion-dollar companies lurk among today’s cleantech VC portfolios. The question is – given the current retrenchment of capital from the field – how many of them will get the fuel to reach the finish line.

In the main, energy and environmental start-ups need outsized time, money, and risk tolerance to reach a big outcome. (That’s not true of IT-meets-energy “cleanweb” companies like Opower or Venrock-backed Nest Labs, but it holds for the deep-tech start-ups that comprise most of the category.) As our case study, let’s take First Solar, the pioneering thin-film solar maker. The company’s first instantiation was founded in 1990; it took 12 years to ship a product, was restarted in 1999, and consumed $150 million of equity investment (all Walton family money) before its 2007 IPO. But at that outcome, First Solar was worth $1.4 billion, valuing the Walton stake at 8.4x. Two years later, at the peak of the solar boom, it was worth 199x!

If this is what success looks like – that is, if the majority of cleantech start-ups will need more time and money to reach big outcomes compared with VC-backed companies overall – a few conclusions follow:

  • Funds focused solely on cleantech will have a longer and deeper “J-curve” of returns compared with VC as a whole. When they reach the same final return multiple, they will take longer to do so, impacting IRR (a quick numerical illustration follows the chart below). Midway through the journey, their performance will look like an “L-curve.”
  • To the extent that cleantech start-ups’ time to exit will be 10 years or more, it’s too early to call success or failure on the current crop – because most of them were founded in 2007 or later. Check back in five years.
  • Because the time frames to an outcome are longer and the amounts of capital required are greater, cleantech investment should be less spiky compared with investment in, say, Internet start-ups. And lo and behold, that’s pretty much what we see:

Internet VC bubble vs. cleantech VC bubble, time-aligned
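Returning to the IRR point in the first bullet above, here is a minimal numerical sketch. The multiple and holding periods are my own illustrative numbers, not figures from the post; the point is simply that the same multiple earned over a longer period implies a lower IRR.

```python
# Illustrative only: the same return multiple earned over a longer holding
# period implies a lower IRR, i.e. the "longer, deeper J-curve" effect.

def irr_from_multiple(multiple, years):
    """Annualized return implied by earning `multiple` over `years`."""
    return multiple ** (1.0 / years) - 1.0

for years in (7, 10, 12):
    print(f"3.0x over {years:>2} years -> IRR of about "
          f"{irr_from_multiple(3.0, years):.1%}")
# 3.0x over  7 years -> IRR of about 17.0%
# 3.0x over 10 years -> IRR of about 11.6%
# 3.0x over 12 years -> IRR of about 9.6%
```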

Cleantech VC now is like Internet VC in 2001: on the downward slope of a bubble, albeit with a more gradual climb and a gentler descent. Note that Facebook was conceived in 2003 – the lowest point for Internet investing post-bust – and that in 2004, Google’s IPO kicked off the renaissance that persists today.

So is the cleantech pullback justified? The data says it’s too early to call. However, it also suggests that the time frame required to reach a conclusion will greatly stretch 10-year closed-end funds.

(A diligent reader may point out my own numbers showing that when VC-backed cleantech start-ups have gone public, they’ve mostly done so in less than 10 years. My take is that most of these companies were rushed to public markets before they were ready – explaining the awful aftermarket performance.)

What happens now?

Cleantech innovation is about to take a walk in the woods. Justified or not, the established path of VC-backed investment is narrowing for a generation of start-ups. Some of those companies – and some of the investment managers that have backed them – will break off into the wilderness to find a new route.

In this environment, I see opportunities in:

  • Selective recaps. About 270 cleantech start-ups can be characterized as “late stage” (they’ve raised Series C rounds or later). Of those, about 150 have demonstrated proof of economics and are focused on scale-up. If capital keeps receding, there won’t be nearly enough money to fund them to exit – enabling savvy late-stage financiers to pick off the best of the bunch in recaps that reap disproportionate returns. In 2011 I thought this capital gap wouldn’t persist, because the likes of VantagePoint and Silver Lake Kraftwerk were out raising huge funds aimed at it; the failure and scale-back of those efforts leaves the opportunity open.
  • Cross-border plays. The U.S. dominates cleantech innovation, but China and other overseas nations dominate deployment. New vehicles are mobilizing to provide cleantech equity investment coupled with cross-border JV creation and operational help – including Formation8 and a stealth-mode firm I can’t reveal.
  • Strategic investment, rethought. Large corporations in industrials and energy have strategic motivations to foster cleantech start-ups: The likes of GE and General Motors want an innovation pipeline, while utilities want a stream of new equipment to rate-base. Institutions are forming to organize this activity in a merchant banking model, like Broadscale at the late stage and OnRamp Capital at the early stage.
  • Foreign techno-colonialism. While U.S. investors bemoan a lack of capital for cleantech, many foreign institutions are awash in it – and view American assets as being generally cheap. To U.S. start-ups, they will play a role somewhere on a continuum between savior (e.g. Japanese trading houses bankrolling cleantech start-ups to get the inside track on project financing) and reaper (e.g. Wanxiang’s A123Systems deal).
  • Philanthropic capital. The cleantech projects that would most change the world – think electrofuels, solar antennae, advanced nuclear power – are also the least likely to be funded, because they combine long time frames with extraordinary risk. There is a case to be made for impact investment in these fields using philanthropic capital as a charitable activity. A new effort called PRIME, backed by four visionary family foundations, is leading this charge.

It’s hard out there for cleantech. The woods are scary and the journey is uncertain. But pioneers are charting a new path through the thicket – blazing trails that others will follow.


10 Rules For Disruptors In The Financial Services Industry

By brianascher

Having worked in the FinTech space many years ago, invested in the space for over a decade, and met with hundreds of talented teams in this area, I have observed the following ten traits among the most successful companies:

Rule #1: Unlock Economic Value   Most traditional financial service firms have invested heavily in branch networks that create expensive cost structures which result in higher prices to customers. Mass-marketing channels and poor customer segmentation also result in higher costs and marketing expenses which translate to higher prices. Online-only financial services can unlock significant economic value and pass this along to consumers. Lending Club offers borrowers better rates and more credit than they can get from traditional banks, while offering lenders better rates of return than they can get from savings accounts or CDs. SoFi is disrupting the world of student loans with better rates to student borrowers and superior returns to alumni lenders relative to comparable fixed income investment opportunities.

Rule #2: Champion the Consumer   Consumers are disenchanted and distrustful of existing financial institutions. Let’s take this historic opportunity to champion their interests and build brands deserving of their love. The team at Simple has envisioned a new online banking experience that puts the consumer first via transparency, simplicity and accessibility. Its blog reads like a manifesto for consumer-friendly financial service delivery. LearnVest is another company on a consumer-first mission to “empower people everywhere to take control of their money.” Its low-cost pricing model is clear and free of conflicts of interest that are rampant in the financial sector.  There is plenty of margin to be made in championing the consumer. The speed at which consumer sentiment spreads online these days creates an opportunity to become the Zappos or Virgin Airlines of financial services in relatively short order.

Rule #3: Serve The Underserved  In my last post explaining why the FinTech revolution is only just getting started, I described how the global credit crunch left whole segments of consumers and small businesses abandoned.  Some segments at the bottom of the economic ladder have never really been served by traditional FIs in the first place. Greendot was one of the pioneers of the reloadable prepaid cards bringing the convenience of card-based paying online and offline to those who lacked access to credit cards or even bank accounts. Boom Financial is providing mobile to mobile international money transfer at unprecedented low rates and ultra-convenience from the US to poorly served markets across Latin America and the Caribbean, and eventually globally.   No need for a bank account, a computer, or even a trip downtown to dodgy money transfer agent locations.

Rule #4: Remember the “Service” in Financial Service  Just because you are building an online financial service does not mean that your service is only delivered by computer servers.  When dealing with money matters many people want to speak to a live person from time to time or at least have this as an option just in case. Personal Capital delivers a high tech and high touch wealth management service via powerful financial aggregation and self-service analysis tools, but also provides live financial advisors for clients who want help in constructing and maintaining a diversified and balanced portfolio. These advisors are reachable via phone, email, or Facetime video chat.  As a rule of thumb every FinTech company should provide a toll-free phone number no more than one click from your homepage.

Rule #5: Put a Face on It  Chuck Schwab. Ken Fisher. John Bogle. Ric Edelman.  These stock market titans may have very different investment styles, but they knew that consumers want to see the person to whom they are entrusting their money, and as a result they each plastered their face and viewpoints all over their marketing materials, websites, and prolific publications. If your startup wants consumers to entrust you with their nest eggs, you ought to be willing to show your face too. This means full bios of the management team, with pictures, a clear location for your company, and numerous ways to be contacted. It’s also a good idea to make sure that the members of your management team have detailed LinkedIn profiles and that a Google search for any of them will yield results that would comfort a consumer.

Rule #6: Be a Financial Institution, not a vendor  The real money in FinTech isn’t in generating leads for FIs or displaying ads for them. That can be a nice business, but the real margin is in making loans, investing assets, insuring assets, or settling transactions. In just a few years Wonga has become a massive online lender in the UK by instantly underwriting and dynamically pricing short term loans. Financial Engines and a new crop of online investment advisors make and manage investment recommendations for their clients.  You do not need to become a chartered bank or an investment custodian as there are plenty of partners that can provide this behind the scenes, but if you can brave the regulatory complexity and develop the technology and skills to underwrite and/or advise exceptionally well, the opportunities are huge.

Rule #7: Use Technology Creatively  The incumbents have scale, brand history, brick and mortar presence, and armies of lawyers and lobbyists. If FinTech startups are going to disrupt the incumbents, you will need to work magic with your technology. How clever of Square to use the humble but ubiquitous audio port on smart phones to transmit data from their swipe dongle, and to use GPS and the camera/photo album to make everyone feel like a familiar local when using Square Wallet.  MetroMile is a FinTech revolutionary disrupting the auto insurance market by offering pay per mile insurance so that low mileage drivers do not overpay and subsidize high mileage drivers who tend to have more claims.  They do this via a GPS enabled device that plugs into your car’s OBD-II diagnostic port and transmits data via cellular data networks in real-time.  Start-ups playing in the Bitcoin ecosystem such as Coinbase and BitPay are certainly at the vanguard of creative use of technology and are tapping in to the mistrust of central banks and fiat currencies felt by a growing number of citizens around the world who trust open technologies more than they do governments and banks.

Rule #8: Create Big Data Learning Loops  Of all the technologies that will disrupt financial services, Big Data is likely the most powerful. There has never been more data available about consumers and their money, and incumbent algorithms like Fair Isaac’s FICO scores leave most of these gold nuggets lying on the ground. Today’s technology entrepreneurs like those at Billfloat, ZestCash, and Billguard are bringing Google-like data processing technologies and online financial and social data to underwrite, advise and transact in a much smarter way. Once these companies reach enough scale such that their algorithms can learn and improve based on the results of their own past decisions, a very powerful network effect kicks in that makes them tough to catch by copycats who lack the scale and history.

Rule #9:  Beware the Tactical vs. Strategic Conundrum  One challenge when it comes to financial services is that the truly strategic and important financial decisions that will impact a person’s financial life in the long run, such as savings rate, investment diversification and asset allocation, tend to be activities that are infrequent or easily ignored.  Activities that are frequent and cannot be ignored, like paying the bills or filing tax returns, tend to be less strategic and have inherently less margin in them for FinTech providers. Real thought needs to go into how you can provide strategic, life changing services wrapped in an experience that enables you to stay top of mind with consumers so that you are the chosen one when such decisions get made. Likewise, if you provide a low margin but high frequency service like payments, you must find a way to retain customers for long enough to pay multiples of your customer acquisition cost.

Rule #10: Make it Beautiful, Take it To Go  A medical Explanation of Benefits is possibly the only statement uglier and more obtuse than a typical financial statement.  Incumbent FI websites are not much better, and over the past ten years many large FIs have heavily prioritized expansion of their branch networks over innovating and improving their online presence.  As a FinTech startup, you have the golden opportunity to redefine design and user experience around money matters and, dare I say, make it fun for consumers to interact with their finances.  Mint really set the standard when it comes to user experience and beautiful design, while PageOnce pioneered mobile financial account aggregation and bill payment.  To deliver a world class consumer finance experience online today one needs to offer a product that looks, feels, and functions world class across web, mobile and tablet.

There has never been a better time to be a FinTech revolutionary, and hopefully these rules for revolutionaries provide some actionable insights for those seeking to make money in the money business.


Why The Financial Technology Revolution Is Only Just Getting Started

By brianascher


The Occupy Wall Street protestors are gone (for now), but the real revolution against banking is still taking place at breathtaking speed, thanks to a new breed of technology entrepreneurs. The financial services industry, long protected by complex regulations, high barriers to entry and economies of scale, is ripe for disruption. Here’s my take on the macro environment, how consumer attitudes are changing and why technology and available talent make now the best time to challenge the status quo.

Global credit markets clamped shut in late 2008 and froze entire sectors of consumer credit. Mortgages became less available, millions of credit cards were revoked, lines of credit dried up, and banks essentially abandoned the small business and student loan markets. This left tens of millions of households in the position of the “underbanked” (have jobs and bank accounts, but little to no credit) and the “unbanked” (no traditional banking relationship at all).  This credit crunch fueled demand for startups like Wonga, Billfloat, and OnDeck Capital to establish themselves and grow rapidly, and the reloadable prepaid card market pioneered by GreenDot and NetSpend soared. While credit has eased for certain segments in certain markets, there are still big opportunities to fill credit voids, especially at the lower end of the market.

The last few years have seen significant changes in banking, payment, tax, investment and financial disclosure regulations. While complex legislation such as the Dodd–Frank Wall Street Reform and Consumer Protection Act is hardly intended to unleash entrepreneurial innovation, and virtually no single person can comprehend it in its entirety, it does contain hundreds of provisions that restrict incumbent business practices, and typically when there is change and complexity there are new opportunities for those that can move quickest and are least encumbered by legacy. Other regulations, such as the Check 21 Act, which paved the way for paperless remote deposit of checks, and the JOBS Act crowd funding provision, are examples of technologically and entrepreneurially progressive laws that create opportunities for entrepreneurs and tech companies. Inspired by the success of pioneers such as microfinance site Kiva and crowd funding sites like KickStarter and indiegogo, I expect that once the JOBS Act is fully enacted and allows for equity investments by unaccredited investors we will see a surge of specialized crowd funding sites with great positive impact on deserving individuals and new ventures.

Within a few weeks of Occupy Wall Street’s start in September 2011, protests had spread to over 600 U.S. communities (Occupy Maui anyone?), hundreds of international cities (did I see you at Occupy Ulaanbaatar Mongolia?), and every continent except Antarctica. Regardless of what you think of such protests, it is safe to say that as a whole we are more skeptical and distrustful of financial institutions than virtually any other industry. Clay Shirky’s term “confuseopoly”, in which incumbent institutions overload consumers with information and (sometimes intentional) complexity in order to make it hard for them to truly understand costs and make informed decisions, is unfortunately a very apt term for the traditional financial services industry. There is thus a crying need for new service providers who truly champion consumers’ best interests and create brands based on transparency, fairness, and doing right by their customers.  Going one step further, peer-to-peer models and online lending circles enable the traditional practice of individuals helping one another without a traditional bank in the middle, but with a technology enabled matchmaker instead.  Perhaps the ultimate example of bypassing the mistrusted incumbents is the recent acceleration in the use of Bitcoin, a digital currency not controlled by any nation or central bank but by servers and open source cryptography.

As a Product Manager for Quicken back in 1995 I remember sweating through focus groups with consumers shaking with fear at the notion of online banking. Today it is second nature to view our bank balances or transfer funds on our smartphone while standing in line for a latte.  And while Blippy may have found the outer limit of our willingness to share personal financial data (for now), there is no doubt that “social” will continue to impact financial services, as evidenced by social investing companies eToro and Covestor. You can bet it will be startups that innovate around social and the incumbents who mock, then dismiss, then grope to catch up by imitating.

I think we will look back in 20 years and view the smartphone as a technical innovation on par with the jet plane, antibiotics, container shipping, and the microprocessor.  While the ever improving processing power and always-on broadband connectivity of the smartphone are the core assets, it has been interesting to see such widespread capabilities as the camera, GPS, and even audio jack used as hooks for new FinTech solutions.  While there are over a billion smartphones worldwide, the ubiquity of SMS service on virtually all mobile phones means that billions more citizens have mobile access to financial services 24×7 no matter how far they live from physical branches.  Cloud and Big Data processing capabilities are further fueling innovation in financial technology typified by the myriad startups eschewing FICO scores in favor of new proprietary scoring algorithms that leverage the exponential growth in data available to forecast credit worthiness.

Financial institutions have long employed armies of developers to maintain their complex back office systems but until recently the majority of these developers worked in programming languages such as COBOL which have little applicability to startups.  While COBOL has not gone away at the banks, more and more of the technical staff spend their time programming new features and interfaces in modern languages and web application frameworks that provide highly applicable and transferable skills to startups only too happy to hire them for their technical training and domain experience.  In addition, successful FinTech companies from the early days of the internet such as Intuit and PayPal have graduated experienced leaders who have gone on to start or play pivotal roles in the next generation of FinTech startups such as Square, Xoom, Kiva, Bill.com, PayCycle, OutRight, Billfloat, and Personal Capital.

These are just some of the reasons now is a great time for financial technology startups and why venture capital is flooding into the sector.  In my next post I will offer some suggestions for FinTech revolutionaries.


Will Disruption Choke Television Business Models?

By David Pakman

Here is the video of a panel I hosted at NATPE in Miami on January 28th. It features Rich Greenfield from BTIG; Betsy Morgan, CEO of The Blaze; Chet Kanojia, CEO of Aereo; Alex Carloss from YouTube; and Kevin Beggs, CEO of Lionsgate. Great conversation about the disruptions facing the TV industry.


Facebook is No Longer Real-Time

By David Pakman

It’s a good thing Facebook is thinking of redesigning the News Feed. Because I think a funny thing is happening to Facebook. For me, the news feed no longer surfaces anything of interest. The opaque algorithm behind it is just not able to produce anything relevant and, more important, timely, at least to me. Facebook appears to be turning back into what it once was: a way to research people in non-real time. A look back into the past. A people-stalking product. It’s back to being a personal LinkedIn.

People publish stuff on the (increasingly mobile) web that is timely and relevant. Sharing baby pictures isn’t really one of those. Sharing pics of how you are experiencing life, which is the Instagram use case, is a great example of this. But my News Feed does not have anything like that in it. My Instagram feed does.

People share highly informative and timely links to news articles and blog posts on Twitter all day long. But my News Feed does not contain any of those. And when I share these types of posts on Facebook, I get no engagement. When I share pics of my kids, I get a lot.

People share bookmarks of products and apparel they want to buy on Pinterest all day long. People don’t do that on Facebook.

Facebook started as a non-real-time service. It was a way to check people out. In the face of the rise of Twitter, they responded aggressively with a News Feed product that showed promise. But now I feel they have really screwed up the filters that govern that feed. That creates bad feedback for those of us who post into it, and the feed itself feels like a vast river of noise and irrelevant posts from people and events that aren’t really relevant to me. Perhaps most importantly, I can’t tune it. The tuning mechanisms are either too subtle (“hide”) or too crude (“report as spam”). I feel powerless.

The irony is that LinkedIn is moving to increase daily engagement by syndicating highly informative posts from influencers. They are trying to become more real-time just as Facebook seems less so.

It’s still amazing for stalking people, though.


TV Is Changing Before Our Eyes

By David Pakman

It’s finally happening. The Internet is taking over TV. It’s just happening differently than many of us imagined. There are two major transformations underway.

  1. The Rise of The Internet Distributors. Led by Netflix, the group of new distributors includes Amazon and Microsoft now, but maybe Apple and Google later. They are largely distributing traditional TV shows in a non-traditional way. All the content is delivered over IP and usually as part of a paid subscription or per-episode EST (electronic sell-through). It is important to note that all of this content contains no advertising and is available entirely on-demand. This content falls into the “non-substitutional” content bucket. To watch it, you don’t need to be a cable TV subscriber.
  2. The Rise of Alternative Content Producers. Thanks to YouTube’s Channel strategy and investment in hundreds of content providers, new producers of content are emerging and offering non-traditional programming, usually in shorter form. This content is marked by dramatically different production economics than traditional TV content, taking advantage of an expanded labor pool and low-cost cameras and computer editing. This alternative content is chipping away at long- and mid-tail viewership on traditional networks (the “filler” and “nice-to-see” buckets.)

Both of these transformations are successful to date and will only become more so. Rich Greenfield has a nice summary of why the TV industry suddenly loves Netflix. (Disclosure: I am long NFLX and have been a stockholder for some time.) The first transformation takes advantage of the massive pressure MVPDs place on traditional cable nets to not offer their programming direct-to-consumer. In this case, HBO’s and AMC’s requirement that you authenticate an existing cable subscription in order to watch their programming over IP simply persuades the cord-nevers to avoid the programming on those networks until the hit shows are offered through Netflix or EST. Netflix, once again, looks like the hero. Those empty threats by Jeff Bewkes that he will never work with Netflix turned out to be, well, empty. The second transformation will take longer to fully prove out, but I believe it will happen. As more of our viewership takes place over IP, we lose our allegiance to networks as the point of distribution and allow new distributors to guide us towards content choice.

There is a third budding area of transformation, but I don’t yet see evidence that a business exists: trying to re-package cable TV bundles and sell them over IP. Companies like Aereo and Nimble.TV offer versions of this. I believe we live in a show-based world. Consumers aren’t looking for networks (with the exception of ESPN and regional sports nets) so much as they are looking for shows. Shows delivered over IP allow for the slow unbundling of television. One of the many challenges about this model for traditional broadcasters is that there is no advertising in this world. The traditional cable net business model enjoys two great revenue streams: affiliate fees and ad dollars. In IP-delivered shows, there are no ads.

Who are the winners and losers in this model? Well, show creators continue to flourish. The new distributors enjoy great success. Of course, ISPs, who are often the same companies as the MVPDs, do fine in the ISP business, but I believe the decline in total cable subs will continue. In a world where shows do not contain advertising, why do we need Nielsen? They have been a measurement standard for decades largely because advertisers needed a third-party validator of viewership. You can see why they have a vested interest in insisting TV ad viewership is not on the decline (despite everyone’s experience to the contrary.) I don’t think cable nets are in immediate trouble. They enjoy a great business model now, and also get to reap EST or licensing benefits after the shows air. But the Netflix House of Cards effort shows that consumers will now expect to be able to watch shows whenever they want and not be bothered by inconvenient broadcast schedules. The day is coming when the cable nets will have to respond.

For startups, one of the wide open spaces seems to be in cross-provider discovery. Now that my shows are spread among Netflix, Amazon, YouTube and on my DVR, I would prefer one interface to reach them all. Companies like Dijit’s NextGuide, Peel, Squrl, and Telly are taking cracks at this important space.


Penn Engineering 2007 Commencement Address

By David Pakman

It was an honor to be asked to address the 2007 Penn Engineering class as their commencement speaker. The video has been posted on YouTube for years, but I was recently asked to post the text. While it is several years old, I don’t believe the message is out of date.

Good afternoon. I know exactly what you are thinking; what is a guy that you have never heard of doing up here delivering your commencement address? Well, truth be told, I am wondering the very same thing. In fact, when Dean Glandt asked me to be here with you, my first reaction was, “No. I have not accomplished enough to stand in front of such a distinguished crowd. What wisdom do I have to impart to them?” Well, I will do my best today to share something meaningful with you. Let me assure you though, this is not where I expected to be when I was sitting in your seat 16 years ago, thinking, “now what?”


When you got into Penn Engineering, your parents, like mine, probably breathed a sigh of relief. “At least he’ll have valuable skills and a career – and not just some vague liberal arts degree.” Well, I have some bad news for your parents. Engineering is the new liberal arts. It is the lingua franca of the next generation. Technology has become so pervasive, particularly in western cultures, that we engineers are no longer the geeks in the corner – we are now responsible for nothing less than the economic, media, and communication underpinnings of society. But the good news is, if you speak this new universal language – and all of you do – then your opportunities to contribute – not just to your own success but to society at large – are limited only by your drive, your desire, and your ideas.

When I sat in your seat 16 years ago, I of course knew exactly where I was headed. Had it all mapped out. I wanted to be a rock star – a drummer in a rock and roll band. Granted, that is not the most expedient path to becoming a CEO of a digital music company. But please don’t be misled by my title. Yes, I realize being a CEO opens some doors. It gives me the platform to accomplish things that I might never otherwise do. But CEO is the least important aspect of my career trajectory. It is representative of the fact that I have merged my two passions into my career. And that’s what I’d like you to think about today.

What are your passions and how can you incorporate them into your career? How can you utilize these newfound skills? How can today become a jumping off point for tackling the things you deeply care about?

When I graduated from Penn Engineering, I had two passions:  I was really into computers and I was really into music. Like many of you, I was tuned in constantly. I played in bands around campus and here in the greater Philadelphia area. I left the engineering lab as often as I could to practice and play gigs. Yes, I was a musician. But I was also an early adopter of technology. Penn helped open my eyes to that. It was clear where the music was headed – computers – to compose and mix, electronic drums, all the new tools of the trade. But I think I knew then that making a career out of my rock and roll aspirations was a long shot.

I came away with a couple of takeaways from this experience.  For one thing, I learned that I had somewhat radical intentions from a very early age. The straight and narrow probably was not going to work for me. But the biggest lesson – and the most empowering one of all – was that it is possible to do what you want to do. Maybe not play Madison Square Garden to 20,000 fans. But I was hopeful that I could combine my passion for music with my keen interest in technology.

So I took the same degree that you are receiving today and I went to work at Apple in California. At that time, Apple was still a huge underdog and its future was by no means certain. I fit in with the culture perfectly. Apple embodied the rebel mentality. It was, pardon the expression, marching to the beat of a different drummer. Working for an underdog and innovator like Apple was a great influence. I learned to “think different.” I learned that consumers will reward you for innovation. And most importantly, I learned that technology could be terribly disruptive to incumbent industries.

Remember the phrase “desktop publishing?” Because of the Macintosh and laser printers, an entire business was upended. Apple (and eventually Microsoft) reaped the benefit. It turned the print industry on its head. I saw a chance to take that very same disruptive psychology and apply it to the music industry.

When I was a student here, Penn was an early contributor to the development of the Internet. It was clear that as information and entertainment became digitized, the businesses of distribution and retail of entertainment would be transformed. I already knew that music was my true north. So I devoted my career toward working to accelerate, and hopefully reap the benefits of this transformation in the music business.

After joining the first digital music company and then founding another, and trying multiple times to build a business which would be pivotal in the transition of the music industry, eventually, with some partners, we bought eMusic, an abandoned dot-com company in disarray. Long story short? We turned it around to become the number two digital music service in the world. Second only to my old company, Apple. Its success is due to the fact that consumers, not the music industry itself, forced a format transition from physical goods to digital goods. All enabled by technology. While the incumbent music industry feared, and even ran from this inevitability, I welcomed the disruptive nature of technology and knew it would fundamentally alter the entertainment industry.

However, I don’t want to set up false expectations that if you stick with the drums, you’ll end up CEO of a music company. Dean Glandt did not ask me here today to talk to you about playing in a rock and roll band. So I asked myself, what can I possibly share with a group as educated and informed as you that would be original and have any possible value whatsoever? I labored over this and as I did, it struck me.  It’s not about technology or engineering. It’s about the disruptive nature of it.

You see, you all are sitting in the catbird seat for the next industrial revolution. You can join existing industries and work to build them bigger –  or you can be the disrupters. The shapers. The policymakers. Every last one of you can land a job in any technology role. At the biggest and most successful companies! You already speak the language. But is that enough? Do you want to get out of bed every day just to log on? Or do you take this incredible genius you possess – this mastery of bits and bytes – and use it for something that matters to you? Something transformative?

There is an ambassador who comes to mind who also got his start like I did, in music. His name is Bono. You’re probably sick of hearing about him. Why does a scruffy singer from a small country in the North Sea have so much clout on the global stage? Because he took a common language, mastered it, and made it his platform for change. It begs the simple question. What is your platform for change going to be? How will you disrupt?

I understand – you might be scratching your head and saying, “C’mon, it’s happened already. The billions have been made – with Microsoft, Google, Yahoo!, MySpace, YouTube. All the big bets have been placed. Everything has already been disrupted.” But in fact I don’t think that’s true. Those companies are just the building blocks for the next wave. These companies, these web players did not exist 30 years ago. No one knew where it was going back then and honestly, we don’t know, today. That’s where you come in.

How do you take these Goliathan companies and their all-encompassing technologies and turn them on their head? How do you wrap your arms around this knowledge and do something that no one has thought of yet? How do you take this “language” Penn Engineering has taught you and make it stand for something you care about?

My understanding of the digitization of music gave me an inkling that someday the songs I grew up with would be available in formats we could not imagine as kids. The model was changing and I saw that and embraced it and tweaked it and now I get to wake up every morning and spend my days guaranteeing that people can buy it. Any kind of music on any kind of player. Period. That’s what I believe in. That’s where I staked my tent.

Although I’m a computer scientist by degree, I am no quantum physicist or nanotech engineer. I didn’t invent something that is going to save the world. I foresaw a market trend in a field I was passionate about and was fortunate enough to get on board at the cusp of the transition. Sniffing out market trends? This is a very good skill to hone. And you’re not going to find it in any book. Turn to your instincts on this one.

Here are some more examples: Sergey Brin and Larry Page – the guys who figured out how to do “search” better? They got it.  Andreas Pavel? How many of you know THAT name? He and his girlfriend tested a new musical device he’d invented on a snowy day in the Swiss Alps, listening to a Herbie Mann/Duane Allman composition – outdoors! – while they walked! The Walkman was born. Transformational! The way we listen to music has never been the same. And Steve Jobs can’t take all the credit on this one.

Nick Negroponte from the MIT Media Lab? One Laptop Per Child! He is going to change the way children learn and he aims to do so one laptop at a time.

And it won’t just change the way children learn and think. It will change the way countries pull themselves out of poverty. The way emerging markets become self-sustaining. One man’s vision – and the language of technology – is going to change the lives of kids who never dreamt of having a chance – from Angola to Myanmar to Kazakhstan. These people are all using technology to disrupt the natural order, and making something better for consumers – for people – at the same time.

Does this mean you have to invent the next big idea? If you have it, fantastic! But I think your mission is greater. You see, as I said at the outset, you are the new liberal arts generation. Technology is now omnipresent in society and you speak the common language. However, there are a lot of you speaking that language and believe me, the pack is closing in. You’re going to need more. You’re going to have to be aggressive, disruptive, and visionary.

I know many of you are thinking about the jobs you will start tomorrow. If I could spark one thought in you today, it would be to look five years out. Ten years out. Ask yourself, what are your kids going to be listening to? What are they going to read, and watch? What’s their world going to look like? And how are you going to shape it? What industries are going to be completely disrupted by the inventions of today, and how can you, and society, benefit?

So I offer you a challenge. Look at yourself today, and ask what’s going to matter to you tomorrow. Which one of you is going to use your remarkable talent to feed Africa? Who’s going to tackle global warming? Does any one of you really believe, 20 years from now, that we’re going to still be running our cars on thick black crude pumped 2 miles out of the ground from a desert?

You are the 2007 graduating class of Penn Engineering. But engineering is merely the platform for the future. You will be more than engineers. You can engineer the shape of our society and shape the destiny of our lives.  You will be inventors. Designers. Architects. Engineers. But through your ideas and design and architecture, you will become the de facto policymakers of the 21st century. You will define our society, all because you understand technology better than everyone else.

Call it a grave responsibility, or the greatest road trip you’ll ever undertake. Either way, you are empowered. There is no turning back. You are truly on the launching pad.

In closing, I offer these words. Follow your passion. Question the status quo. Bang a few drums. Don’t be afraid to make some noise. Take this awesome new language you speak and use it. Put it to work. We truly are on the cusp of a revolution. Get out there and be disruptive. Be responsible and give a damn. And lead. Show us where we’re headed next. It really does matter.

 


The Very Curious Hybrid Boom

By Matthew Nordan

tl;dr: U.S. hybrid vehicle sales were up 61% in 2012. It’s unclear why.

Riddle me this: Why did U.S. hybrid sales take off last year?

Prior to 2012, hybrids looked like something between a fad and a niche. Sales peaked in absolute terms way back in 2007 and hybrid market share maxed out in 2009. Despite rising gasoline prices, it seemed that Americans cared neither about getting 50 miles per gallon nor about the environmental benefits thereof.

Then last year happened.

U.S. hybrid vehicle sales and market share, 1999 to 2012

Hybrid sales rose 61% to 434,498 cars in 2012 – the biggest absolute increase ever and the biggest percentage gain in seven years. Hybrids accounted for 3.0% of new vehicles sold, up 42% from 2011.
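As a quick sanity check on those figures, here is a back-of-the-envelope sketch. The implied 2011 numbers are my own arithmetic from the growth rates quoted above, not reported data.

```python
# Back out the implied 2011 figures from the 2012 numbers quoted above.
# Rough checks only, not reported data.

sales_2012 = 434_498      # hybrids sold in 2012 (from the post)
sales_growth = 0.61       # +61% year over year
share_2012 = 0.030        # 3.0% of new vehicles sold
share_growth = 0.42       # +42% year over year

implied_sales_2011 = sales_2012 / (1 + sales_growth)   # ~270,000 hybrids
implied_share_2011 = share_2012 / (1 + share_growth)   # ~2.1% market share
implied_market_2012 = sales_2012 / share_2012          # ~14.5 million vehicles

print(f"Implied 2011 hybrid sales:    {implied_sales_2011:,.0f}")
print(f"Implied 2011 market share:    {implied_share_2011:.1%}")
print(f"Implied 2012 total new cars:  {implied_market_2012:,.0f}")
```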

The big question: Why?

It wasn’t new choices. While nine new hybrid models were introduced in the States in 2012 (of a total 44 available), they accounted for only 9,708 hybrids sold (2.2%) – and the Prius took half the market like it has since 2009.

It wasn’t a price drop. Prius sticker prices fell $2,500 last year (about 11%) as Toyota restocked post-Fukushima, but prices of conventional non-hybrid cars from Japan dropped too.

It wasn’t higher gas prices. Retail gasoline prices were nearly flat from 2011 to 2012. (And if the gas price determined sales, hybrids should have peaked in 2008 and plummeted the year after; neither one happened.)

It wasn’t an improving economy. Real GDP growth was 2.2% in 2012 and 2.8% in 2010. Yet hybrid market share blew up in 2012 and shrank in 2010.

It wasn’t more driving. In fact, annual vehicle miles traveled per person fell slightly in 2012, extending a trend that started in 2004. “Peak car,” anyone?

U.S. gas price/GDP/vehicle miles traveled vs. hybrid market share, 1999 to 2012

None of these things correlate and it makes no sense! Any ideas?


Immigration vs Innovation

By Richard Kerby


The United States is one of the leading countries in technology innovation. Every day businesses all over the country think up new and creative ways to tackle some of the world’s most difficult business challenges. Whether it’s inventing new platforms, creating more effective marketing strategies or developing mobile apps that are designed to make life easier, the U.S. is always looking to improve things. However, the current immigration laws have many worried that if things don’t change for the better, innovation will inevitably suffer.

How Current Immigration Laws are Proving Problematic to Innovation

When H-1B visas were plentiful, the U.S. economy was on a steady incline. At the time, no one thought that the reason for this incline was related to the foreign workers. The truth is, foreign workers holding H-1B visas are responsible for creating more jobs for Americans. Critics believe that the influx of foreign workers is hurting the economy by minimizing job opportunities for U.S. citizens; however, there is much proof to the contrary.

The majority of American college graduates major in the liberal arts, whereas the technology and math skills needed to fuel innovation are far more common among foreign workers.  According to Hamilton Place Strategies, 40% of Fortune 500 companies were founded by highly skilled foreign workers and their children.

So what results from the current visa cap? A loss of jobs and a loss of newly created positions, which will in turn hurt the economy and decrease U.S. technological innovation.  See below for a look at how the visa caps have trended over the last 30 years.


How Technology Companies Plan to Tackle the Issue

Technology companies in the U.S. are aware of the impact that the visa cap will have and have thus begun creating strategies to circumvent the current immigration limitations.

Some businesses have resorted to hiring foreign workers as “freelancers” working from home offices, while others have even more creative solutions. A company called Blueseed has decided to create a mobile office in the form of a ship to spark innovation. This ship will house a thousand of the most highly skilled innovators from several different countries and will be docked approximately 12 nautical miles from San Francisco (where the water is still considered “international”). This is meant to spark new ideas and create new start-ups that will eventually expand and inevitably end up in the United States.  For more info on Blueseed check out this great slideshare http://slidesha.re/12It42j.

As businesses begin to realize the real economic implications of the visa cap, multiple solutions will no doubt follow. This is needed to obtain the skills these foreign workers possess while offering them resources to continue creating more U.S. jobs.

Being the first person in my family born in the US makes this a particularly important topic for me, and I would love to hear everyone else’s thoughts and what other tactics you are seeing companies employ to get around the current immigration roadblocks.


Will The Readmission Rate Penalties Drive Hospital Behavior Changes?

By bkocher2013

This post first appeared in the Health Affairs Blog.

Since the development of the metric in 1984 by Anderson and Steinberg, inpatient hospital readmission rates have been used as a marker for hospital quality.  A good deal of attention is now being paid to the new readmission rate penalties in the Affordable Care Act (ACA).

While the penalties have garnered significant attention, it is unknown whether they will materially change hospital behavior.  In this post, after reviewing the mechanics of the penalties, we take a close look at how they are likely to affect hospital incentives.  We also suggest some refinements to the penalties that could help achieve the aim of reducing preventable readmissions.

How The Penalties Work

The readmission penalty in the ACA is based on readmissions for three conditions: Acute Myocardial Infarction (AMI), Heart Failure, and Community Acquired Pneumonia.  For each hospital, the Centers for Medicare and Medicaid Services (CMS) calculates the risk-adjusted actual and expected readmission rates for each of these conditions.  Risk-adjustment variables include demographic, disease-specific, and comorbidity factors.  The excess readmission ratio is the actual rate divided by the expected rate.

Simplifying a little, the aggregate payments for excess readmissions are summed for all three conditions over the past three years, then divided by total base operating DRG payments for the past three years to calculate the penalty percentage.  Base operating DRG payments are IPPS (Inpatient Prospective Payment System) payments less DSH (disproportionate share hospital), IME (indirect medical education), and outlier payments, except for new technology.  The total penalty is the penalty percentage times total base operating DRG payments for that fiscal year, provided that the total amount does not exceed 1 percent of base operating DRG payments in fiscal year 2013, a cap that increases to 3 percent in fiscal year 2015.
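Put in code, the simplified mechanics read roughly as follows. This is a sketch only: the hospital figures are made up for illustration, and the actual CMS methodology includes risk adjustment and other details omitted here.

```python
# Simplified sketch of the readmission penalty mechanics described above.
# All dollar figures are hypothetical; the actual CMS methodology includes
# risk adjustment and other details omitted here.

def penalty_percentage(excess_readmission_payments, base_drg_payments_3yr):
    """Aggregate payments for excess readmissions (AMI, heart failure,
    pneumonia, summed over three years) divided by base operating DRG
    payments over the same three years."""
    return sum(excess_readmission_payments) / base_drg_payments_3yr

def total_penalty(pct, base_drg_payments_fy, cap=0.01):
    """Penalty applied to the fiscal year's base operating DRG payments,
    capped at 1% in FY2013 (the cap rises to 3% in FY2015)."""
    return min(pct, cap) * base_drg_payments_fy

# Hypothetical hospital: excess payments per condition, summed over 3 years
excess = [1.2e6, 2.0e6, 0.8e6]                 # AMI, heart failure, pneumonia
pct = penalty_percentage(excess, 900e6)        # vs. $900M base DRG payments
print(f"Penalty percentage: {pct:.2%}")        # ~0.44%
print(f"FY2013 penalty: ${total_penalty(pct, 300e6):,.0f}")  # ~$1.3M
```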

According to calculations by CMS, the overall penalty in 2013 is likely to be 0.3 percent of inpatient reimbursements, or $280 million, with 8.8 percent of hospitals receiving the maximum penalty.

Changes in hospital performance are predicated on these values being significant for hospitals.  The natural benchmark to the penalties that providers face is the amount that hospitals earn from potentially avoidable readmissions.  Readmissions in 2010 were estimated by CMS to cost Medicare $17.5 billion.  Studies suggest the avoidable portion of total readmissions ranges from 5 percent to 79 percent, with the median being 27.1 percent.

Multiplying the estimated avoidable readmission spending by the average hospital inpatient EBITDA (earnings before interest, taxes, depreciation, and amortization) margin of 2 percent suggests that the current profit from the avoidable portion of readmissions is about $95 million.  Thus, the penalties appear to be greater than the profits.  This will be increasingly true in the future as Medicare moves to more bundled payments for acute care or global payments on a patient basis.  In such a payment system, there will be no profits for readmission, so the only financial consequence for the hospital will be the penalty.
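The arithmetic behind that estimate, as a quick sketch using the figures cited above:

```python
# Rough arithmetic behind the profit-vs-penalty comparison above.

readmission_cost_2010 = 17.5e9   # CMS estimate of Medicare readmission spend
avoidable_share = 0.271          # median estimate of avoidable readmissions
ebitda_margin = 0.02             # average hospital inpatient EBITDA margin

profit_from_avoidable = readmission_cost_2010 * avoidable_share * ebitda_margin
print(f"~${profit_from_avoidable / 1e6:.0f} million")   # ~$95 million,
# versus the ~$280 million aggregate penalty CMS projects for 2013
```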

Of course, behavior change is not binary — changes will happen along a spectrum.  Consider the fixed and variable costs for a hospital to address readmission rates.  There are many fixed costs in reducing readmissions, including the need for electronic medical records (EMR), adding non-fungible labor for case management and discharge planning, and training employees in discharge planning.  If the penalty on the hospital is high enough, the fixed cost inertia can be overcome.  Related, legislation is helping to reduce the size of these fixed costs by incentivizing investment in EMRs.

For a hospital that has overcome the fixed costs, there are also variable costs necessary for reducing readmissions.  These may include more nursing time per admission, additional supplies and post-discharge medications, and care coordination costs.  Variable costs are potentially offset if beds are filled by another admission that generates higher margins.

Refining The Penalties

The interplay of the fixed and variable cost factors suggests that movement for hospitals will not start until the fixed cost is overcome, but once that threshold is surpassed, those readmissions most easily prevented will be taken out of the system.  Rapid downward movement will then slow, as the net benefit shrinks for each patient.  These types of costs suggest a policy modification of the economic incentives in the future: a sliding scale based on the number of readmissions to make each additional readmission increasingly more expensive.
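One way to express that sliding-scale idea in code, with purely illustrative parameters; this is not part of the current CMS program:

```python
# Illustrative sliding-scale penalty: each additional excess readmission
# costs more than the last. Parameters are made up; this is not part of
# the current CMS program.

def sliding_scale_penalty(excess_readmissions, base_rate=1_000.0, step=1.25):
    """Total penalty when the marginal cost of each additional excess
    readmission grows geometrically."""
    return sum(base_rate * step ** i for i in range(excess_readmissions))

for n in (5, 10, 20):
    print(f"{n:>2} excess readmissions -> ${sliding_scale_penalty(n):,.0f}")
# The marginal cost of the nth readmission keeps rising, so the incentive
# to prevent "the next one" never flattens out.
```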

Another important component of the readmission calculation is risk adjustment.  Currently, the risk adjustment in the CMS penalty program includes only two demographic factors: age and gender.  But studies suggest that higher readmission rates are linked with race and location.  If we assume that lower-income populations face higher readmissions for reasons that are harder to prevent, the penalty will disproportionally affect hospitals that cannot realistically match their expected readmission rate.

To estimate the magnitude of this effect, we assumed that the readmission rate difference of 3.5 percentage points between minority and non-minority hospitals was purely structural.  If a hospital which would otherwise be at the national average faces this type of difference, the calculated penalty would be 18 percent of revenue for those admissions.  This would be inequitable and likely ineffective in reducing readmission rates, since it is targeting those readmissions that are unavoidable.

The extent to which hospitals reduce readmissions will depend on other factors as well.  The organizational structure of the hospital — for example, whether it is part of an Accountable Care Organization (ACO) and who leads the organization — could matter a great deal.  The fixed-cost base is much higher for hospital-owned ACOs than for physician-owned ACOs; parts of a hospital cannot be shut down, even if the bed is not filled.  For this reason, hospital-owned ACOs may be less focused on readmission rate reductions in the short run, but more focused over time as they make capital allocation decisions.

Public perception, through increased transparency by efforts like Hospital Compare, may also push hospitals to change behavior.  Studies show that hospitals have been shamed into improving mortality rates once those rates started being measured and reported.

In all, the new readmission penalty holds a good deal of promise.  But to maximize benefits to patients, cost savings, and rate of improvement, future refinements are needed to better align incentives and adjust for more patient characteristics.


Connected Security

By Richard Kerby

While at CES, I noticed a large number of connected TVs and connected cars, which led me to think that these devices are ripe for breach by hackers. So I went digging and, lo and behold, thieves in the UK have begun the looting process (http://bit.ly/10PWDyJ). Thus I felt it was a topic worth writing about.  Here are my 2 cents.

Why Does Connected TV Security Matter?

Cyber attacks on Android- and iOS-powered cell phones and computers have increased dramatically over the past five years.  Now that televisions are becoming “smarter” by being powered through these platforms, the attacks have become more sophisticated.

Increasingly, televisions are starting to incorporate smart operating systems which enable them to connect over wifi.  From there, criminals are able to hack into the system through an app.  Cyber criminals have already discovered a flaw in some Samsung smart TVs, which allows them to listen in and look into households through the television.

Do Cyber Criminals Really Care About Connected Cars?

Cyber criminals care about any smart device that allows them to gain access to your personal details.  They no longer just have an interest in stealing your login details to social media networks or your bank account information; cyber criminals are now interested in controlling your car, as well. 

Connected cars have wifi connectivity which enables the driver to access GPS, email addresses, and stream movies.  Some connected cars offer security features through connective devices which include the braking and door locking systems.  Cyber criminals could potentially hack into your connected car system, giving them the opportunity to take control of your engine speed, car security alarm,  wifi connectivity, door locking system, and your braking system. 

How to Make Sure You’re Safe

Manufacturers of connected vehicles have already been briefed on these threats and are working to create patches and encryption that make it harder for potential cyber criminals to hack in. The U.S. Department of Transportation is also testing connected car devices to decrease their vulnerabilities.

Downloading security apps and making sure your devices are password-enabled helps to decrease your risk of being hacked; however, some criminals are still able to bypass the systems. 

Covisint, Windows, Cisco, and McAfee are all devising ways to reduce the risk of cyber attacks through connected televisions and vehicles.  These measures include cloud services that restrict access to connected devices, beefed up security access, SSL encryption and authorization certificates.

I would love to hear further thoughts on the matter, and on what other companies out there I should be paying attention to.


A Different Strategy for Seed Investing

By Bryan Roberts

Spray and pray…. Try before you buy… Foot in the door… Take your pick: these describe the dominant seed investment strategy today in Silicon Valley.  The start-up world’s current angst around the “Series A crunch” is in great contrast to my seed experience, where most efforts progress to further financings and several are on their way to being standout successes in the Venrock portfolio.

For me, seed investing is not a low cost, little-time-required option on the A round.  It is a big investment of time and effort in order to be intimately involved in the formative stages of a company, despite the fact that the dollars-in and percentage ownership don’t hit usual venture fund metrics. Since the commitment front-runs the money and ownership, it is something we only do when we are so compelled by the people and the idea that we “have” to jump in long before it makes “traditional” sense.  Our mission is to help in whatever way we can, in hopes of increasing the company’s speed, likelihood and scale of success.  It also allows us to emphasize our approach as long term, supportive, performance oriented company builders.  Let’s face it, money is cheap, but time and effort are really expensive – for both entrepreneurs and venture capitalists.

A “deep involvement” approach requires making far fewer commitments than most others who have embraced seed investing in recent years – whether angels or venture funds. I have done about one a year across a variety of the more capital-efficient healthcare subsectors – healthcare IT, diagnostics, and services.  Castlight Health and Ariosa Diagnostics are among several recent examples that illustrate our approach.

In mid-2007, I partnered with Todd Park (x Athenahealth, now CTO of the U.S.) and then Gio Colella (x RelayHealth), both of whom Venrock had funded previously, to explore the opportunity to create a company at the intersection of web – healthcare – consumer.  We worked for six months on the project and, in early 2008, seeded and incubated Castlight Health.  In that initial round, we invested $333,000 and proceeded to build the company brick by brick, eventually investing $17 million for nearly 20% ownership.  Over the last four years, Venrock has devoted every resource and connection possible – countless strategy sessions, customer meetings, management recruiting, and follow-on investor introductions – and the Board now includes a second Venrock partner in Bob Kocher, who still spends more than a day a week with the team.  Today, Castlight has the opportunity to become a pivotal participant in the creation of a functional healthcare delivery market, improving care while saving billions of dollars.

Ariosa is a similar story, but one whose roots are found in an unsuccessful Venrock seed investment.  We lost $300,000 after nine months in a diagnostic start-up when the CEO, John Stuelpnagel (x Illumina, a Venrock investment), came to the Board with the message that we all had better things to do than continue to push that particular rock uphill.  Soon thereafter, in 2009, we seeded the combination of a terrific first-time entrepreneur/CEO in Ken Song, then at Venrock, with John as Executive Chairman to tackle new approaches to prenatal molecular diagnostics.  Three years later they are leading the race to provide an entirely new and improved standard of care to expectant mothers – one in which genetic abnormalities can be confidently assessed at ten weeks of pregnancy with no risk to the baby.

Success in venture investing is really hard to come by, and with seed investing even more so. No matter what the strategy, there will be failures and even more pivots before those few that succeed become great. That said, as with Castlight and Ariosa, when it works, it is awesome.  You can assist in a company’s formative stage, create close and productive relationships with entrepreneurs, and build your ownership over subsequent financings. Early help in the project typically leads the team to want to work with no one else but you – in essence, you have become part of the family.   This month I made my seed investment for 2012 – a stealth company, also in the healthcare IT space. I think these entrepreneurs would tell you that our track record as active participants in prior seeds, as described by those CEOs, was the overriding factor in their decision to work with Venrock. We will do our best not to disappoint them.

In the end, our goal, and every VC’s, is to create great returns for our LPs. We believe that a targeted, time-intensive approach to seed investing is orthogonal to others’ and increases the chance of creating great companies by affording them resources early on that they would not get in other seed models.   But it requires a leap of faith and trust between entrepreneur and VC – a leap we are eager to take.

Comments   |   Add to Facebook!   |   Tweet it!   |   Digg it!

Entrepreneur Tools: The Returns Analysis

By Matthew Nordan

tl;dr: To successfully target VCs, view your deal through their eyes.

I got an outstanding piece of advice in my first job: “Always see the world from the other person’s point of view.”

If you’re trying to sign the pivotal customer, think from their perspective about what price they can accept. If you’re trying to recruit the killer engineer, understand how she weighs moving her kids when they’re halfway through elementary school.

And if you’re trying to raise capital from a VC – someone who invests other people’s money, and is out of a job if there’s insufficient return – analyze your own deal the same way he will.

I learned this the hard way.

In 2007 a somewhat younger and substantially less gray-haired Matthew was out raising a venture capital round for my previous company Lux Research. The good news is that it ended well – we were fortunate to bring on west coast VC firm Catamount Ventures, where partner Mark Silverman brought a rare combo of vision and pragmatism to the board. The bad news is that I wasted a lot of time pitching to firms that I should have known weren’t a good fit in advance, because the returns math couldn’t work for them.

My mission today is to arm you so you don’t make the same mistake.

When a VC investor hears your pitch, he’ll do math in his head to figure out if your company is in-bounds. (While bigger factors like team and market determine a “yes,” the math alone can dictate a “no.”) Typically, he’s answering two questions:

1) Can this investment move the needle? A venture investor can only attend to so many portfolio companies at once. To earn one of these limited slots, an investment has to be “needle-moving:” A successful outcome must be big enough in absolute terms to warrant a spot (regardless of the ratio of dollars out to dollars in).

As you can imagine, what’s needle-moving depends on the size of the fund that’s making the investment. A billion-dollar fund needs billion-dollar IPOs to return a profit; while investing $1 million into a company and getting $10 million back would yield a phenomenal 10x return multiple, you’d need 100 such outcomes just to break even! On the other hand, the same $10MM-for-$1MM return would be massive to a $10MM seed fund, where that single investment would put the fund in the black.

While there’s no magic number, a decent rule of thumb is that a needle-moving investment must return at least 10% of the fund to the VC in the success case. Here’s a close-to-home example: At Venrock we’re currently investing out of a $350MM fund. “Needle-moving,” according to this heuristic, is therefore $35MM. We tend to own 20% or so of the companies we invest in on average, so any one of them must be capable of being worth $35MM / 20% = $175MM when it’s bought or goes public – at an absolute minimum. If a successful outcome for your company would be getting acquired for $20-30 million, you should not pitch me; target other investors with more appropriately sized pools of capital who would view this outcome as a big win.
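To make that heuristic concrete, here is a minimal sketch of the arithmetic in Python; the fund size and ownership figures are simply the Venrock example above, and you would swap in your target investor’s numbers:

def needle_moving_exit(fund_size, needle_fraction, expected_ownership):
    # Smallest exit value at which the VC's stake returns `needle_fraction` of the fund.
    required_return = fund_size * needle_fraction
    return required_return / expected_ownership

# Venrock example from above: $350MM fund, 10% "needle-moving" bar, ~20% ownership.
print(needle_moving_exit(350e6, 0.10, 0.20))  # -> 175000000.0, i.e. a $175MM exit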

2) Is the return multiple big enough? After assessing the absolute return, the math moves on to the return multiple, which is a relative measure. If everything goes right, how many dollars will I get out for each dollar I put in?

The return multiple that a VC investor seeks depends on the stage at which he invests, because of the time value of money: You earn about 7%/year if you make 2x your money over 10 years, but you could earn the same 7% by getting 1.07x in one year. As a result, early-stage investors (who invest at company founding and go 5-10 years before seeing an outcome) target higher returns than growth-stage investors, who aim to put in money shortly before the acquisition or IPO. Also, early-stage investors fund younger, riskier companies, most of which fail. Therefore they seek higher multiples in the success case than do growth-stage investors, who make some profit on most of the companies they back.

Again, there’s no magic number, but a good rule of thumb is that an early-stage VC needs to be able to envision a 10x return multiple if everything goes right. (A growth-stage investor, on the other hand, may see 2x to 5x as the target to hit.)
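If it helps to see the time-value relationship as a formula, the annual rate implied by a gross multiple M earned over N years is M^(1/N) - 1. A quick sketch (the holding periods in the last two lines are illustrative, not rules):

def annualized_return(multiple, years):
    # Convert a gross return multiple earned over `years` into an equivalent annual rate.
    return multiple ** (1.0 / years) - 1

print(annualized_return(2, 10))  # ~0.07: 2x over a decade is about 7% a year
print(annualized_return(10, 7))  # ~0.39: an early-stage 10x over 7 years is ~39% a year
print(annualized_return(3, 2))   # ~0.73: a growth-stage 3x over 2 years is ~73% a year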

Armed with these principles, you can model the investment returns that a VC would get by putting money into your company, and use that information to target your investor search. Use the spreadsheet template that you can download here (which I’m archiving on the tools page) to do the math and model the return from the VC’s perspective. As inputs, you’ll need your financial projections (revenue, cost of goods sold, and opex); your capital plan (how much money you’ll need to raise and when); and a valuation metric (the spreadsheet uses price-to-sales, but you could also use price-to-earnings – in any case, set the metric by looking at comparable companies that have gone public or been acquired, and use a conservative consensus number in the model). What you’ll get out is the VC’s absolute return and return multiple.

As an example, consider this case:


Let’s say that this company is an energy analytics start-up trying to figure out if it should pitch to VC X, an early-stage investor with a $300MM fund. The company is raising a $10MM Series A aiming for a $15MM pre-money valuation, and thinks it will need another $25MM in two years to get to profitability. It believes it will have $80MM in revenue at year six, and an analysis of comparables shows that similar companies have been bought or gone public at 5x revenue. VC X would do half the A round ($5MM, purchasing 20% ownership) and invest its pro rata amount of the round to follow (i.e., 20% of the $25MM B round = $5MM more in two years), for $10MM invested over the life of the company (if everything goes right).

The good news is that, from VC X’s perspective, the investment clears the “needle-moving” hurdle. If the company hits its $80MM revenue target in six years, it’s worth $80MM x 5 = $400MM; VC X will own 20%, so its absolute return of $80MM is well above 10% of VC X’s $300MM fund size.

The bad news is that VC X can’t quite see its way to a 10x return. It’s going to put in $10MM in total ($5MM now and $5MM later) for an $80MM absolute return, yielding a multiple of $80MM / $10MM = 8x. This is good, but if it represents a true best case it may not be enough. There would likely need to be compensating positive factors (phenomenal team, opportunity to expand to other markets, a pivotal early partner, demonstrably active acquirers) for this opportunity to compete against others.

A secondary point worth noting: The $10MM invested over the life of the company would be 3.3% of VC X’s fund – big enough to be a “real” investment worth a partner’s time, but not so large that it sucks up too much of the fund (VCs generally avoid putting more than 5%-10% of a fund behind any one company).
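Here is a minimal sketch of the same math the spreadsheet does, using the VC X example above; it assumes ownership stays at 20% because the investor takes its full pro rata in the B round:

def vc_return(exit_revenue, revenue_multiple, checks, ownership_at_exit, fund_size):
    # checks: dollars the VC puts in across rounds (e.g. the A check plus the pro rata B).
    exit_value = exit_revenue * revenue_multiple
    invested = sum(checks)
    absolute_return = exit_value * ownership_at_exit
    return {
        "exit_value": exit_value,                        # company value at acquisition/IPO
        "absolute_return": absolute_return,              # dollars returned to the VC
        "multiple": absolute_return / invested,          # dollars out per dollar in
        "share_of_fund_returned": absolute_return / fund_size,
        "share_of_fund_invested": invested / fund_size,
    }

# VC X: $80MM year-six revenue, 5x revenue at exit, $5MM in the A plus $5MM pro rata
# in the B, 20% ownership at exit, $300MM fund.
print(vc_return(80e6, 5, [5e6, 5e6], 0.20, 300e6))
# -> $400MM exit, $80MM back, 8.0x multiple, ~27% of the fund returned (clears the
#    10% needle-moving bar), ~3.3% of the fund invested

Rerunning it with a slower revenue ramp, a lower valuation multiple, or a bigger capital requirement gives you the scenario analysis described next.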

When you go through this exercise, run multiple scenarios – the VC you’re pitching certainly will! See what things look like with a slower revenue ramp, a lower valuation metric, a higher capital requirement (Venrock lore holds that companies typically require 2.5x more money over their lives than they anticipate at first fundraising), etc. However, I don’t recommend putting this kind of analysis into your pitch deck – it presupposes too much knowledge of the other party’s motivations and comes off as kind of arrogant. Keep it to yourself and use it to inform your financial plan.

Hopefully this tool will equip you for more successful fundraising. Let me know your feedback, and please point out my inevitable Excel errors for correction in an update…

Comments: 8   |   Add to Facebook!   |   Tweet it!   |   Digg it!

How Three Federal Initiatives Are Set To Transform U.S. Health Care

By bkocher2013

This post first appeared in Forbes.

The U.S. health care system today is fraught with wasteful spending that does not contribute to better outcomes for patients. The U.S. spends more on health care than any other country in the world on a per capita basis. We spend more on health care alone than the entire GDP of France; however, average patient outcomes in the U.S. are on par with Cuba and Slovenia, in spite of newer hospitals and more varied technologies.

Three federal initiatives — the HITECH Act, the Health Data Initiative (HDI) and the Affordable Care Act (ACA) — are designed to improve clinical quality and the patient experience, and make health care more affordable. As a result of these changes, several trends are emerging, including movement toward outcome-based payments, higher labor productivity, decreased demand for hospital-based care and better, more efficient consumer markets.

Four major trends are driving these benefits for consumers and employers:

Outcome-based payments

As we move from paper-based to digital, we are able to change what patients buy, how payors pay, and how doctors are reimbursed for care. Outcome-based payments increase the importance of care coordination, so providers will need increased technological capabilities to share data, form care teams, and perform predictive modeling to figure out which patients are at higher risk. Consumers benefit greatly from these models because a doctor’s success will be contingent on their patients doing better and spending less money on unnecessary services. It will also lead to more convenience for patients because doctors will want to see them on nights and weekends rather than send patients to the emergency room or readmit them to hospitals. Emails will also not go unanswered when it is in the doctor’s interest to make sure patients know what to do.

Higher productivity

Astonishingly, healthcare has not been able to replicate the productivity gains of the broader U.S. economy over the last twenty years. As we age and expand coverage in an era where prices cannot consistently increase faster than GDP, health care providers will need to creatively address and improve labor productivity. Almost every other sector of the U.S. economy has improved labor productivity, achieved better value and, in some cases, reduced prices. Health care providers who develop more productive ways to deliver care should improve margins, gain market share and improve the competitiveness of their businesses. This is good for patients, payors, and providers as waste is eliminated and reliability improves.

Lower demand for hospitals

Even considering the nation’s aging demographic, most hospitals will continue to have excess capacity. As reimbursement systems increasingly reward cost efficiency and reductions in readmissions and complications, many markets may become over-bedded. Additionally, alternative, less expensive treatment settings should become more common, including urgent care centers, high intensity primary care and extensivist models, and home-based care models as remote monitoring is perfected and new therapies continue to shift care from inpatient to outpatient settings. The patients who remain in hospitals will increasingly be only the most complex.

Better functioning markets

Millions of newly covered health care consumers will join the ever-increasing numbers who already have high cost-sharing health plans. Even in subsidized Silver-level Exchange plans consumers will have 30% cost sharing up to their out-of-pocket maximum if their incomes are about 350% of the federal poverty level. This group will be sensitive to differences in price and value, resulting in a more engaged patient consumer base and will cause the health plan and provider marketplaces to become more competitive. Patients will have access to more data and new applications to help them shop for health care based on price, quality, and convenience; hospitals may compete on outcomes and experiences; doctors will seek to differentiate their services and likely further specialize around particular conditions and types of patients they are expert in caring for.

Health care reform will also cause the roles of health plans, hospitals and doctors to evolve in several ways:

Health plans will offer a more rewarding member experience

Health plans are already responding to these trends by creating more engaging, consumer-centric ways to get people to care about, and improve, their health, such as sponsoring wellness programs and member education; developing their own health care delivery systems which offer unique member care experiences; and offering analytic support for providers to help them better achieve population health goals cost effectively.

Hospitals will have to compete on care outcomes and total value

In a health care system where consumers have easy access to information on hospital cost, quality and patient experience, hospitals may find it difficult to compete on factors such as the newness of the facility or the breadth of services offered, and may instead need to compete on their ability to deliver superior outcomes at a better cost. We anticipate that all hospitals will want to take action to assure that their doctors adhere more often to evidence-based practices, comply with cost-effective care processes, and support efforts to reduce complications and readmissions.

There will be a race to employ doctors

Today, slightly more than half of all practicing doctors in the U.S. are employed by hospitals. Employing physicians helps hospitals by enabling them to better manage what happens to patients before and after the hospital, encourage the use of cost-effective supplies and medical devices, implement clinical pathways, and work well in team-based care models. Health plans are also beginning to employ doctors to better manage risk, enable population health management, and help create unique products and, perhaps, specialized delivery systems.

Bottom line: the ACA, the HITECH Act, and the HDI are moving the U.S. toward a health care system that can be more cost effective, more accessible, and deliver better outcomes. These initiatives, coupled with the new innovations emerging in health care IT, will help drive this change and are making it an exciting, and better, time in health care for consumers, entrepreneurs and investors alike.

Comments: 8   |   Add to Facebook!   |   Tweet it!   |   Digg it!

Creating Outcome-Driven Health Care Markets

By bkocher2013

(Post co-authored by Nikhil Sahni and Bob Kocher.)  This post first appeared in the Health Affairs Blog.

America’s health care market does not work well.  It is inefficient, asymmetric, and in most cases not particularly competitive.  The Affordable Care Act (ACA) legislated a myriad of changes to reform and improve insurance markets with exchanges as a centerpiece.  While exchanges and reforms like subsidies, guaranteed issue, age bands, community rating, reinsurance, and risk adjustment are all helpful, a huge opportunity remains to segment the health care market around different categories of patient demand.

Basic economic theory states that a well-functioning market is aligned between supply and demand.  Ideally, suppliers and customers align around the preferences of the customers – the unit of alignment is driven by the demand side.  When we examine health care, we see demand falling into three segments:  healthy people who have episodic needs, chronic disease patients with predictable needs, and highly complex patients with less predictable needs.  Given the high variance between the three submarkets, we believe that each of these segments should be thought of as a discrete market and served by different types of insurance products, payment models, and health care providers.

We believe that this is necessary since each of these segments values providers differently.  For a healthy patient with periodic needs, convenience and experience are likely to matter more than continuity with a provider and care team.  Conversely, chronic disease patients are likely to value clinical outcome attainment, complication avoidance, and care coordination very highly.  And complex patients will need and value the customization, access to research, and specialization that the latest medical breakthroughs can deliver.  Not only are the sources of value different, but so are the delivery systems and payment models needed to align incentives for value.

Re-Envisioning The Health Care Market

Healthy patients.  For healthy patients with periodic needs, an episodic approach is also the most economically efficient.  These patients do not require the fixed cost of a large system of care and instead should purchase discrete specific services — ideally a bundle of care to deliver a specific pre-defined outcome.  In this model, a patient buys insurance and broad access to providers and, when a health need arises, receives a budget for his or her episode of care.  We favor reference-based pricing so the patient can purchase an episode outcome without additional cost sharing while retaining the option to pay more if he or she chooses.  The market is thus incentivized to manage to a specific outcome in the most cost-effective way and to compete on delivering extra value for those patients who are willing to pay more.
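A minimal sketch of how reference-based pricing splits an episode bill, assuming the plan pays up to the reference price and the patient owes any amount above it (the knee-replacement prices are hypothetical, purely for illustration):

def split_episode_bill(provider_price, reference_price):
    # Plan covers the episode up to the reference price; the patient tops up any excess.
    plan_pays = min(provider_price, reference_price)
    patient_pays = max(0.0, provider_price - reference_price)
    return plan_pays, patient_pays

# Hypothetical knee-replacement episode with a $30,000 reference price.
print(split_episode_bill(28_000, 30_000))  # -> (28000, 0.0): outcome bought with no cost sharing
print(split_episode_bill(45_000, 30_000))  # -> (30000, 15000): the patient chooses to pay more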

To meet demand for bundled payments organized around episode outcomes, the supply side should realign into specialized care units that focus on a few procedures, organ systems, or disease areas, all within a broad PPO network.  This has historically proven successful for elective conditions: the Dartmouth Spine Center has a surgery rate of 10 percent – lower than the national rate – with 100 percent of patients reporting their needs were well met.  Other examples of episodic providers include ambulatory surgery centers and orthopedic and cardiovascular specialty hospitals.  We also foresee capitated systems managing population health procuring discrete episode services from specialty providers, since these providers should be able to offer equal or better outcomes at lower prices than an Accountable Care Organization (ACO), integrated delivery system, or multispecialty group.

Patients with chronic disease.  For chronic disease patients, the primary outcome goal is to minimize a condition’s short-term inconveniences and long-term complications.  The market should thus incentivize a long-term perspective centered on patient engagement, adherence, and side-effect prevention.  On the demand side, customers should purchase care from a provider able to care for all aspects of a patient’s condition, which creates incentives for all players – patients, doctors, providers, and drugmakers – to manage cost.  The basis of competition should be the ability to deliver annual health and complication avoidance at lower costs.

In this model, incentives are most aligned when providers are paid using a risk-adjusted capitated payment.  To compete, providers should organize in organizations such as ACOs, Patient-Centered Medical Homes (PCMHs), multispecialty groups, or integrated delivery systems with strong capabilities in managing risk, population health, and costs.  Examples of these types of systems are Group Health, Geisinger, Kaiser Permanente, Healthcare Partners, and CareMore.  In this model, incentives for patients to adhere to treatment plans, and remain in the system of care, are reinforcing.

Complex patients.  Finally, there are certain conditions that are too complex to fit into either market, such as complex cancers, high acuity conditions, and rare diseases.  These conditions often exhibit both chronic condition and episodic characteristics and are best managed by academic medical centers or high-acuity specialty facilities like comprehensive cancer centers and children’s hospitals.

Our view is that the current fee-for-service system is the best approach for handling these cases.  To constrain inflation and encourage competition, fee for service should be coupled with utilization review, incentives to use evidence-based care, and transparency around risk-adjusted outcomes and expected out-of-pocket costs.  Paying for these as episodes will not work because high patient heterogeneity exists and the size of the market cannot support competition at the episode-level.  Moreover, it is hard to define quality and value for many of these types of patients and conditions.

The Way Forward: Turning Theory Into Practice

These payment models and provider organization approaches maximize value by encouraging healthy patients to get their conditions fully resolved for a fixed price, chronic disease patients to access a care team rewarded for avoiding complications, and complex patients to receive customized care and access from specialists.  Furthermore, each of the three submarkets – healthy patients, chronic disease, and complex and rare conditions – is large enough to be self-sustaining and attractive.  We estimate that the chronic condition market is $1.1 trillion, the episodic care market is $760 billion, and the residual fee-for-service complex and rare conditions market is $900 billion in 2011.

The three markets are growing at similar rates.  (See Exhibit 1.)  If the economics are aligned, they will also be able to create growing value for patients through productivity gains, falling prices, better outcomes, and far better patient experiences.  Fortunately, each of these markets and provider models exists today in many geographies.  They are just not widespread enough or coexistent.

Exhibit 1

Giving consumers more insurance options.  Transforming theory into a tangible system presents certain challenges, which we believe will be overcome in the next few years.  First, patients need insurance product choices.  The advent of state exchanges and community rating are catalytic events that could lead to this reorganization if exchanges permit reference-based pricing plans and allow integrated delivery offerings with narrow networks.  The employer market is already moving down this path with the marked increase in defined-contribution health benefits supported by private exchanges, where higher cost sharing and narrow network plans are often offered.  We are also seeing many more employers shifting to reference-based pricing and episode bundling approaches for elective conditions to reward employees for selecting high-quality, lower-cost providers, and to encourage providers to offer a full course of care for a bundled price.

Provider restructuring.  On the provider side, overhead should theoretically decrease, not increase.  The largest providers that can pull in adequate populations will focus on patient and population health, a trend already being seen with groups like Partners Healthcare shifting to an ACO and capitated payment orientation.  Competition will also lead to the emergence of more specialized providers for acute and episodic care among community hospitals.  Already for complex and rare conditions, regional centers like the Mayo Clinic and traditional academic medical centers exist.  The push to submarkets should accelerate the provider landscape transformation and reduce the extraneous providers that lack focus and a niche.

The biggest barrier today is linking benefit designs and reimbursement models with patient segments.  Once commercial payers approach providers with products that segregate patients into these segments with corresponding reimbursement, providers will rapidly reorganize to serve the segments that they are most competitive at supporting.  While this approach does generate more ACOs and PCMHs, the past year has shown that these can be formed relatively quickly to meet demand.  The emergence of retail-oriented primary care providers also indicates that episodic care models are able to proliferate and scale in response to demand.

Addressing changes in consumer health needs.  One additional challenge will be how patients react when health needs change mid-year.  All patients, regardless of submarket, will have certain basics: insurance, preventive care, and consumer protections.  Longitudinal care is valuable irrespective of submarket and is hugely important to reducing the growth of health care costs. As health needs adjust for patients during the year, we see two potential solutions. First, patients will still have access to other submarkets to receive the necessary care.  Second, the pool of patients who will need product adjustments will be significant, and the value cannot be ignored by payers. Thus, some supplemental plans may emerge that enable patients to gain access to additional types of providers.

Matching patient needs and demand with specific types of providers and reimbursement approaches is better for patients.  If incentives are aligned with the types of value desired by different types of patients, price increases should no longer outpace value creation, and providers will compete and differentiate in ways that are most valued by their core patient constituencies.   Doing this through the creation of well-functioning submarkets — instead of forcing a single, ill-functioning market — should also unleash productivity gains as providers specialize around narrower segments and stop investing in services that they do not do well, do at scale, or need.

Overcoming the barriers will be a significant challenge.  However, we are already seeing some shifting in the health care landscape, in addition to certain provisions in the ACA which will come online in the upcoming years and add further movement.

Comments: 8   |   Add to Facebook!   |   Tweet it!   |   Digg it!

Installing a Nest, Investing in Nest

By Matthew Nordan

tl;dr: Nest Labs performs magic – making energy efficiency awesome, even for the nontechnical and non-green. We’re delighted to invest in the company.

Back in February I acquired a Nest Learning Thermostat, famously designed by Apple’s original iPhone team. Looks cool! Learns your habits! Controlled from your phone! At the time, I felt the product was something of an overhyped fetish object for energy nerds, but I was happy to get one as I sit squarely in that demographic. (Plus, while the Honeywell thermostats in our home were nominally programmable, their interface was so obtuse that we never set them and thus wasted money.) Behold the tweet:

Just arrived. Wish me luck on the install. yfrog.com/mmcaukbj
Matthew Nordan (@matthewnordan) February 14, 2012

And then it sat in the box for four months.

It’s not that I didn’t want a beautiful piece of industrial design on my wall – it’s that I believed installing a thermostat was a perilous project that would consume a weekend afternoon. Every time Mrs. Nordan and I thought “you know, we really ought to put that thing up,” we quickly found a reason to do something else. And so it sat in the box.

Until June, when my colleague Matt Trevithick came over for dinner and asked me how we liked our Nest. I sheepishly responded that it hadn’t made it to the wall. Matt assured me that the setup was super-easy (he had one) and declared that we’d be doing the installation that very second.

Mrs. Nordan and I accepted the challenge. The ground rules: she’d 1) do the whole thing, 2) use only what came in the box, and 3) time it (it’s supposed to be a half-hour job). If you want the details, see this photo log, but the bottom line is that Matt was right. The entire process was simple; every conceivable thought was given to user-friendliness (down to the bubble level built into the backplate, so as not to install the thing crooked); and the network and app connectivity “just worked.” We clocked in at 22 minutes for the install, followed by 15 minutes’ worth of software updates (that’s what I get for waiting months to activate the thing).

Done. Niche nerd product: operational.

But in the two months that followed, it became clear that this was not a niche nerd product. I had underestimated Nest. As I used the thing, I saw that:

  • The experience was legitimately great. Our old beige box seemed out of place in a house otherwise filled with stainless steel; the Nest just looked better. We never programmed the old box because it was so awkward; the Nest had an iPad app. Even basic stuff was better – the AC would kick right in after we set the dial on the Nest instead of incurring a mysterious delay like before. I’d thought that setting the temperature from your phone was a stupid idea, until I found myself doing exactly that when I’d land at the airport late at night and wanted the downstairs to be cool before I got home.
  • It appealed to non-techies and the non-green. Most people who walked into our house and saw the thing wanted one, even those lacking a sustainability gene; they looked at it like an iPhone, not a climate controller. It became a living room conversation piece, like a stereo in the 1960s (I think of Pete on Mad Men). When Mrs. Nordan – a deeply nontechnical gadget-phobe – decided that it would make a great Christmas gift, the breadth of appeal became clear.
  • It delivered energy efficiency effortlessly. Nest’s default mode is to learn your schedule and make your house more efficient – thus saving you money – without you having to do anything. Little things that are transparent to the user, like turning on just the fan instead of the A/C compressor when it makes sense to do so, simply happen; you don’t have to know (or care). Every Nest household is a potential demand response node that doesn’t require the utility to roll a truck. Combined, those homes are a trove of fine-grained data that can be used to target retrofits.

I concluded that I wasn’t looking at a better thermostat. This was something else: the reinvention of an unloved category via thoughtful design. Nest was to its ilk what the Prius was to cars, what TiVo was to VCRs, or – best comparison – what Dyson was to vacuum cleaners (20% market share in the U.S. at 4x the average price point just three years after introduction). And if this team could make a thermostat (of all things) into an engaging product, who knows what else they’d come up with?

Shortly thereafter our energy team at Venrock evaluated Nest Labs as a venture investment. I’ve written before in this space that the smart grid has been a failure for consumers because it’s all too complicated and no one cares. To change the input/output ratio of consumption, the experience has to be awesome, winning on merits instead of getting by on shame. Having looked at this field for many years we’ve seen a ton of consumer energy propositions; Nest was the first one to clear this essential bar.

We’re pleased that Venrock is investing in Nest and backing a phenomenal team with an expansive vision. Matt, Ray Rothrock, and I – all of whom happened to be users before we were investors – are on the case. For her part, Mrs. Nordan has the second Nest unit going in upstairs.

Comments: 8   |   Add to Facebook!   |   Tweet it!   |   Digg it!

Nest Installation Photo Log

By Matthew Nordan

tl;dr: Mrs. Nordan vs. Nest thermostat! Will it take an afternoon to install? Will we ruin our house in the process? No on both counts!

In June we installed a Nest Learning Thermostat in chez Nordan; I helmed the camera for the obligatory unboxing post, but promptly got occupied with other things and left the pics to languish on my laptop. Better late than never? For context on why I’m posting this months after the fact, see “Installing a Nest, Investing in Nest.”

Challenge accepted.

What’s in the box. What you can’t see here (because we were speeding through it and/or I am a lazy photographer) is that the screwdriver, wall screws, anchors, etc. needed to get the thing mounted are in the box too.

On the chopping block: The inscrutable Honeywell thermostat that we’re replacing.

Honeywell with the faceplate off. See those wires? They provide power and talk to the HVAC system. Our mission is to get them plugged into the right spots on the Nest.

The Nest box includes little labels to wrap around the wires as you unplug them, so you won’t forget which is which when you have to plug them back in. We duly attach them.

Penciling in holes where we’ll put the anchors for the wall screws. More user-friendliness: Note that the Nest’s backplate has a level built into the front of it so you won’t mount the thing crooked.

Drillin’. We probably could have just bored a hole, but, you know, completeness.

In go the anchors.

Attaching the backplate.

Good. Now time to plug the wires into the tabs. The labels on the wires have letters on them that match up to the tabs, so even we can’t screw this up.

Fiddling…

Done. Once the wires are plugged in you moosh them back before attaching the faceplate.

Faceplate goes on…

…and we’re up! Note that the display doesn’t actually look like that – it looks like a normal LCD display – it just showed up with these artifacts when captured through my camera.

Connecting the Nest to WiFi. Once we’ve done this, the stopwatch reads 22 minutes, at which time we’re done with the physical install. But, this being 2012, we wouldn’t be done without…

…software updates, of which we got three, totaling 15 minutes; the Nest rebooted itself between each. This was the only annoying part of the installation and one that took longer than I expected (how big can a thermostat firmware update be?). I presume that if we hadn’t waited four months between getting the thing and installing it we wouldn’t have had three of these in a row. While it’s downloading, let’s get the iPhone app running:

App store entry. Confidence-inspiring rating.

On its way…

The app finds the Nest automatically; we have to click the thermostat itself to complete the enrollment. (I find myself wondering if/how/when this could be hacked. Be ever vigilant, Nest Labs.)

Success!

Total time: 37:09.6 – roughly a 22-minute install plus 15 minutes of software updates.

Comments: 8   |   Add to Facebook!   |   Tweet it!   |   Digg it!

What Makes a Great Cleantech Team?

By Matthew Nordan

tl;dr: Winning cleantech start-up teams are complete at founding, have strong pre-existing relationships, and include the inventor of the core technology.

This post was co-written with Josh Rogers, a former Venrock intern who’s now in National Grid’s Strategic Planning and Corporate Development group. A version of it also appeared at GigaOM.

A year ago I published a post called “What It Takes to Build A Cleantech Winner” based on an analysis of 18 cleantech success stories – venture-backed start-ups that executed big IPOs. The conclusion was that it’s not the technology (the best one rarely wins) and it’s not the market (if the market’s already big and attractive, you’re probably too late); instead, it’s the team that determines success.

That begs the question: What makes a great team?

To answer this question, you’d need to do two things. First, you’d need to analyze the personal histories of core team members at a slew of successful cleantech start-ups to figure out what they had in common. Second, you’d need to compare these people against their peers at unsuccessful companies in the same domains, to learn whether the winning teams differed from the losing ones.

Taking up the challenge was Josh Rogers – then a student at Tufts’ Fletcher School of Law and Diplomacy – who interned with me and conducted this research for his master’s thesis. Josh went about it like this:

  • First, he established a set of 27 winners – VC-backed cleantech start-ups that had either gone public on a major exchange since 2000 or filed an outstanding S-1 at the time of the analysis, and for which we could build fine-grained histories of the executive team. Examples: Tesla, Color Kinetics, Silver Spring.
  • Second, he assembled a set of matched-pair companies that were in the same industries as the winners and were founded at about the same time, but which unambiguously failed: They either went bankrupt or sold in a fire sale. We would have liked a counterpart for every winner, but because so few companies have tanked completely instead of limping forward, we were limited to ten matches. Examples: Solyndra, GreenFuel, WebGen.
  • Then he collected exhaustive data about the backgrounds of every key executive in each of these 37 businesses – 122 people total – including their age, education, country of origin, past work experience, and a host of other variables (39 altogether).

When Josh began his work, we joked that maybe he’d crack a hidden code: Perhaps I’d hear “Well, Matthew, at all the winning start-ups the CEOs were in their 40s and joined from large companies, while the CTOs hailed from the following five universities.” If so, I could simply ignore all the other business plans I get and focus on the ones that matched the template. Hey, a man can dream, right?

That didn’t happen.

In fact, when we looked at the winners, we found that nothing at all seemed to correlate with success. Founding team members’ ages were all over the map, from Genomatica CEO Chris Schilling (26 years old at company founding) to First Solar impresario Harold McMaster (an octogenarian at 83).

No variety of undergraduate education dominated (although Ivy League degree-holders should perhaps beware).

Among graduate degree-holders, no university stood out. In fact, across the 51 successful team members with advanced technical degrees, 39 universities were represented with only three appearing more than twice (MIT, U. Illinois, and CMU).

And so on. In fact, the only interesting correlation we found was that team members at winning companies tended to be industry outsiders: A mere 28% of them had direct work experience in their start-up’s industry. However, this attribute didn’t predict success because it was the same for our sample of failed companies too (where 26% of execs had prior direct work experience).

At this point, we changed our approach. Perhaps we were asking the wrong questions? Instead of studying the individuals, Josh began looking at the relationships between them. It’s here that we found the trends hiding in plain sight:

Winning teams were complete at company founding. Of the 88 key executives profiled in the 27 successful companies, 74% were present at founding and another 9% joined during the first year. Only one out of six joined after that.

CEOs changed rarely. MBA orthodoxy holds that different stages of a company’s life require different leadership skills, so the CEO should be swapped out as companies develop. Our data didn’t support that. Eleven out of 27 successful companies had a CEO at founding who stayed through the IPO or S-1 filing; another eight were founded without a CEO, but recruited one (usually in the first year) who stayed for the long haul. Only eight winning companies changed CEOs, with only one clearly hostile transition (namely Elon Musk’s takeover at Tesla).

Successful founding teams had strong pre-existing relationships. At 74% of successful companies, at least two of the founding team members had strong relationships before the company was formed – either from working together in past lives (e.g. the four Color Kinetics co-founders, who shared lab space at CMU) or knowing one another well outside of work (e.g. Solazyme’s CEO and CTO, who became close friends as freshmen at Emory).

Winning start-ups included the accomplished core scientist who invented the technology as part of the founding team. Two-thirds of the winning companies exhibited this trait – think Frances Arnold at Gevo or Yet-Ming Chiang at A123Systems. I frequently see start-ups out of universities where the key technologist declines to join the founding team, choosing to remain in academia instead and consult with the company at most; this behavior doesn’t seem to correlate with success.

When Josh examined our matched-pair set of failed companies, they exhibited the opposite trends:

  • Six out of 10 failed companies replaced their CEOs along the way (versus three out of ten).
  • Only half had strong pre-existing relationships (versus three out of four).
  • Only three out of ten had the accomplished core scientist as part of the founding team (versus two out of three).

The conclusion: Great founders hail from every age, background, and school. What differentiates winning teams is their relationships. Successful cleantech companies tend to be bands of brothers and sisters – including the core inventor – that come together on their own, form a complete team, and have a leader fit for the long haul. In contrast, here’s the recipe for a failure: Find an interesting technology, assemble a team of competent people around it who didn’t previously know one another, and don’t worry about bringing the original inventor along.

I don’t want to present false absolutism here: There’s a great deal of subjectivity involved, the sample sizes are small, and the error bars are wide. But these trends whacked me over the head hard enough that they changed the way I look at energy and environmental start-ups. It’s the team – and relationships make the team.

Comments: 8   |   Add to Facebook!   |   Tweet it!   |   Digg it!

Bright Future for the Marginal Megawatt

By Matthew Nordan

tl;dr: Life is about to get a lot better for demand response and energy efficiency companies.

One of the challenges of venture capital is that you invest in companies now based on what you know now, but the world may look very different by the time the company exits (i.e., when it’s bought or goes public).

When people talk about this, they usually cite the investment bets that look dumb in retrospect – where investors deployed capital at a time of heady expectations and woke up to cold reality later on. (Amidst dot-com hysteria, otherwise-smart people could envision their morning coffee delivered by Kozmo and paid for with Flooz; afterward, not so much.)

However, one can also make the opposite blunder: Deciding not to place bets in a downer environment, and then missing the opportunity to reap returns when things look up.

This is the milieu that demand response and energy efficiency start-ups face today.

Whether they are reducing electricity demand at peak times (Enernoc, Gridium), deploying energy-efficient retrofits (NextStep Living, Ameresco), or doing high-tech real-time stuff to balance the grid (Enbala, CUE), these companies all have one thing in common. They traffic in what I call marginal megawatts – the MW at the very top of the load curve that determine whether the peaker plant gets turned on or whether a new transmission line must be built. The demand response players do this by clipping peaks while the energy efficiency ones do it by dropping the baseline, but they deliver a similar net result. (You could add grid-scale energy storage to this grouping if you wanted to.)

Such companies are poorly valued today. Public stocks tell the tale – for example, as I write this, Enernoc, Ameresco, and PowerSecure are all trading at less than 1x sales and 12x EBITDA. (For those of you who don’t often think about valuations: That’s bad for a growth company.)

This situation is about to change.

What’s the value of a marginal megawatt? In my mind, it should be proportional to two things – 1) the cost to deliver that same MW from conventional generation resources, and 2) the amount of free capacity that’s available to do the generating. Both are hitting inflection points right now.

First, let’s take the marginal cost per MW. For this analysis, let’s consider the market for “frequency regulation,” a horrible misnomer of utility-speak that means “injecting or removing power on the grid over fine time scales to balance supply and demand.” (The name comes from the fact that imbalances cause the grid to deviate from its 60 Hz AC frequency.) Frequency regulation is traded in open marketplaces on a $/MW/hr basis, and its price is probably the purest measure of a marginal megawatt.

As it turns out, the price of frequency regulation correlates very closely with the price of natural gas, because gas plants are usually the market price-setters. See the chart below, which plots the clearing price for frequency regulation (in the United States’ biggest electricity market, the 13-state PJM region) against the price of natural gas (as measured at the Henry Hub distribution center). The r² on this is 0.80, meaning that natural gas accounts for 80% of the variance in frequency regulation price:
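For readers who want to reproduce this kind of check, here is a minimal sketch of how an r² like the one above can be computed from two price series. The monthly figures below are made up for illustration; they are not the actual PJM clearing prices or Henry Hub quotes behind the chart.

    # Hypothetical monthly series; real PJM and Henry Hub data would be substituted here.
    import numpy as np

    henry_hub = np.array([8.9, 7.2, 5.1, 4.0, 4.4, 4.1, 3.6, 2.9, 2.5, 2.0])            # $/MMBtu
    freq_reg  = np.array([42.0, 35.5, 27.0, 22.5, 24.0, 23.0, 19.5, 16.0, 14.5, 12.0])  # $/MW/hr

    r = np.corrcoef(henry_hub, freq_reg)[0, 1]   # Pearson correlation coefficient
    print(f"r = {r:.2f}, r^2 = {r**2:.2f}")      # r^2 = share of variance explained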

Natural gas prices started plummeting in 2008 due to the hydrofracking revolution and reached a 12-year low of $1.82/MMBtu this past April. As that price was below most producers’ breakeven levels, many folks speculated that drilling only continued because the exploration companies would lose their land leases if they didn’t keep making holes. Since then, new drilling in gas plays has cratered and the price has started climbing back up – it’s at $3.15 as of this writing, and the futures market has it north of $4 by the end of next year.

As the price of natural gas rises, so will the value of marginal megawatts. And there’s reason to believe that the price will increase sharply beyond 2013 if U.S. natural gas starts getting used in new ways – like being exported. Export applications currently filed at the DOE would ship out 16 billion cubic feet per day, which is two-thirds of current U.S. shale gas production!

So higher gas price = more valuable marginal megawatts. Now let’s look at generating capacity.

As goes GDP, so goes electricity demand. When U.S. GDP peaked in 2007, so did our electricity consumption. And when the economy tanked, electricity consumption fell. 2012 should be the first year that these indicators exceed their 2007 levels.

When there’s idle generating capacity around, the companies that own it get hammered. Consider independent power producers, the companies that operate conventional power plants. Their share prices closely track total electricity generation, which in turn tracks GDP – all of which dropped sharply after 2007:

So do I need to write this next paragraph? Only now is electricity demand getting back to its 2007 peak. Doubtless some plants that broke ground five years ago have since been completed and sit underused, so excess capacity will likely persist for a couple more years. But, inexorably, that capacity will get mopped up as GDP rises and electricity demand grows with it, and sooner or later we’ll find ourselves bumping into a new ceiling. Just as predictably, the value of companies that resolve this supply/demand imbalance – those that deliver marginal megawatts – will jump. Note that when Enernoc went public right before the 2007 electricity demand peak, it did so at 20x the previous year’s revenues. It’s now trading at 0.6x. I’ll bet that looks really different in, say, 2016.

The kicker: Demand response and energy efficiency companies will slaughter conventional generators on cost. A new fossil generator costs $1 million per MW in capex, plus or minus, and requires fuel and transmission on top of that. Setting a big user of electricity up to curtail its demand by 1 MW costs maybe $50k – and that’s it. As we climb to a new electricity peak, generators will lose the battle for the marginal megawatt.

So whether your start-up is trimming peaks, lowering baselines, or synchronizing supply and demand, take heart. It’s been a long, hard five years. But a brighter day is just around the corner.

Comments: 8   |   Add to Facebook!   |   Tweet it!   |   Digg it!

Why The Supreme Court Decision On Health Care Reform Doesn’t Really Matter

By bkocher2013

This post first appeared in Forbes.

When the Supreme Court announces its decision on the constitutionality of the Affordable Care Act (ACA) it will kick off a storm of analysis around the political impact.  An outcome that upholds the law as a whole will be seen as a positive for the President, whereas a decision that strikes down some or all of the law could be a boon for the Republicans and Governor Romney’s campaign.  In the middle scenario, parts of the law, such as the individual mandate, are struck down while the rest is upheld.  While it is impossible to predict the outcome, it is a safe prediction that the decision will be followed by a lot of punditry on cable news channels.

This debate misses one essential fact:  the Supreme Court’s decision does not matter as much as political pundits think.  The American healthcare system is in the midst of intense experimentation and change that cannot, and will not, be stalled by the whims of the judicial system.  The major forces behind the changes in our healthcare system—rising costs, an older, sicker population and technological innovation—show no signs of abating and do not depend on Federal legislation.  They are fueled by private sector demand, not policy preferences in Washington.  So, while coverage may not come as soon as hoped for the uninsured, the systemic efforts to make our health system more affordable and higher quality will continue.

Some changes, like the widespread replacement of paper records by electronic medical records, are visible from a patient’s-eye view; others, such as changes in the relationship between insurers and hospitals, are not immediately visible.  These less-visible changes are potentially even more profound.

Some of the most important changes involve the financing of health care.  In the traditional way of doing business, insurers paid hospitals and physicians under a “fee for service” system; essentially they are paid for the quantity of medical services provided, regardless of the outcome.  Unfortunately, fee-for-service is inflationary, giving hospitals a perverse incentive to focus on driving up volumes and activity without regard for cost, value, or achieving better outcomes.  Even if a patient gets the wrong care or inadequate care, the hospital is paid to treat complications or readmissions rather than to prevent them.  Nobody—especially not the patient—wins and all of us pay.

HMOs grew in the 1990s in an attempt to fix these incentives by combining the insurer and the care provider, but patients hated their HMOs because they restricted choice without proving that they were delivering better care.  The current wave of experimentation, in flavors such as bundled payments for “episodes” of care (e.g., paying for everything associated with a diagnosis, procedure, and treatment in a single payment), accountable care organizations (ACOs), and new penalties for excessive hospital readmissions, consists of tactics that try to fix these misaligned incentives and aim to cut costs and improve care for the patient while preserving choice.

Further proof that these reforms have staying power well beyond HMOs is the recent pledge by United Healthcare, the nation’s largest private insurer, to follow many of the regulations included in the ACA even if the law is repealed.  Other insurers, including Aetna and Humana, quickly made similar pledges.

Even Congress’ current gridlock cannot completely choke off policy innovation at the federal level.  The Centers for Medicare and Medicaid Services has used its authority to experiment with a variety of demonstration projects that do not require congressional approval.  One example is the current project to award bonus payments to Medicare Advantage plans which score well on the program’s Star rating system for providing better and more cost effective care.  While a repeal of the ACA would remove some of the agency’s authority, it would not prevent the agency from experimenting with a variety of demonstration projects that could lead to improved payment models.

Not to be left out—a great deal of experimentation is happening at the state level.  One of the cornerstones of the ACA is the requirement that each state create a “health insurance exchange” which serves as an online marketplace for individuals to purchase health insurance.  Eighteen states, representing 42% of the US population, are in the process of laying, or have already laid, the legislative groundwork to establish exchanges.  Many of these states, including California, have indicated that they plan to continue with the exchanges regardless of the Supreme Court’s decision.

Patient attitudes are changing too.  Polling conducted by the West Wireless Health Institute shows that patients are increasingly worried about the costs of their care and are taking steps to control them.  This means that they are more likely to scrutinize their health insurance benefits and opt for high-deductible health plans (HDHPs).  While attitudes can be slow to change, the inexorable force of the millions of patients choosing health plans that reward shopping for lower cost and better quality care will not be stymied by the Supreme Court’s ruling.

So while it is uncertain how the Supreme Court will rule, or how the ruling will impact November’s results, it is a certainty that our health system will continue on the path of more affordability, more integrated care, and more focus on patients because consumers are demanding these changes. These unassailable trends are led by the private sector and, if anything, will accelerate regardless of the Supreme Court’s views of the meaning of the Commerce Clause.  Moreover, since the private sector contributes virtually all the profitability in the health system, it is a force far more powerful than politicians.

Comments: 8   |   Add to Facebook!   |   Tweet it!   |   Digg it!

Network Effects are Magical

By brianascher

Network Effects are magical.  They are the pixie dust that makes certain Information Technology businesses, especially on the Internet, into juggernauts.  They can be found in both consumer and enterprise companies.  Network Effects are special because they:

  1. Provide exponential growth and value creation potential
  2. Erect barriers to entry to thwart would-be competitors
  3. Can create “Winner Take All” market opportunities

Network Effects are like a flywheel–the faster you spin it the more momentum you generate and enjoy.  But not all markets lend themselves to Network Effects.  They are not the same as Economies of Scale where “bigger is better.”  To be certain, Economies of Scale can give strong competitive advantage and defensibility to the first to get really big (or Minimum Efficient Scale as the economists call it.)  For example, SAP and Oracle benefit from having massive revenue bases which enable them to employ armies of engineers who develop rich feature sets and also to hire huge sales forces.  However large these companies are today, though, their growth rates, especially in their early years, were far more modest compared to those Network Effect companies whose growth resembled a curved ramp off of which they launched into the stratosphere.

There are four main types of Network Effects:

  1. Classic Networks, in which the value of a product or service increases dramatically (roughly with the square of the number of users) as more people use it.  Communications networks like telephones, fax, Instant Messaging, texting, email, and Skype are all examples.  Metcalfe’s Law captured this as a simple equation where the value of a network is proportional to N², where N is the number of nodes.  (A quick numeric sketch of this effect follows this list.)  Typically, each node in a classic network is similar to the others and possesses both send and receive capabilities.  This will become clear juxtaposed against the other network effects below where there are different types of nodes.  Other examples of classic Networks are social networks (e.g., Facebook) and payments (e.g., PayPal).
  2. Marketplaces, where aggregations of buyers and sellers attract each other.  Lots of sellers means variety, competition, and price pressure, which all serve to attract more customers.  And because the customers flock, more sellers are enticed to participate in the marketplace.  eBay, stock exchanges, and advertising networks are all examples.  One nuance of marketplaces, however, is that they differ in terms of the scale required for acceptable liquidity.  For example, ad networks can achieve sufficient reach and liquidity at relatively low levels, which is why you see thousands of online ad networks, each exhibiting network effects but not in a winner-take-all fashion.  Stock exchanges and payment networks require far greater scale for network effects to operate, which is why you see much greater concentration in these industries.
  3. Big Data Learning Loops.  “Big Data” is all the rage in techland, but just having gobs of data is not necessarily a Network Effect, nor any sort of competitive advantage per se.  What you really need is unique data and algorithms that process that data into insights which then lead to decisions and actions.  A flywheel effect comes when you get a critical mass of data that you mine for insights, pump that value back into your product or service, and thereby attract more users who generate more data.  And so on.   Venrock portfolio company Inrix is a good example, where they mine GPS data points to derive automotive traffic flow data.  The more commercial fleets, mobile app users, and car companies they can get data from, the better their traffic analysis becomes, which gets them more users and hence more data.  They turn data into an accuracy advantage that earns them the right to get even more data.
  4. Platforms are a very special and powerful form of network effects.  In Information Technology, a true “platform” is where other developers build technology and businesses on top of your technology and business because you offer them one or more of the following:
    1. Lots of users/customers, and you represent a distribution opportunity for them
    2. Compelling development tools, technology, and (sometimes) advantageous pricing
    3. Monetization opportunities

Examples include operating systems like Microsoft Windows, the Apple App Store, and Amazon Web Services.
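To put a number on the Metcalfe’s Law point from the first item above, here is a toy sketch comparing linear value growth (the economies-of-scale case) with quadratic, Metcalfe-style value growth. The per-node constants are arbitrary and purely illustrative.

    # Toy comparison: linear value vs. Metcalfe-style quadratic value.
    def linear_value(n, k=1.0):
        return k * n          # value grows in proportion to users (economies of scale)

    def metcalfe_value(n, k=0.01):
        return k * n * n      # value grows with the square of the number of nodes

    for n in (100, 1_000, 10_000, 100_000):
        print(f"{n:>7} users: linear = {linear_value(n):>12,.0f}   network = {metcalfe_value(n):>14,.0f}")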

Each of these four types of network effects can be extremely powerful on their own.  Yet, even more power is derived when a business can harness multiple types of network effects in synergistic ways.  Google, Apple and Facebook do this for sure, but a less well-known example is Venrock portfolio company AppNexus, which operates a real-time online advertising exchange and technology platform.  The exchange aggregates advertisers, agencies, publishers and ad networks for marketplace liquidity, but also offers a hosting and technology platform for other AdTech companies and ad networks to augment their own businesses.  And the vast troves of data AppNexus processes every millisecond flow back into the system as optimized and targeted ad serving.

Network Effects are what you want fueling your business.  Sometimes you just need to get clever about discovering and harnessing them.

Comments: 8   |   Add to Facebook!   |   Tweet it!   |   Digg it!

Meaningful Use Of Health IT Stage 2: The Broader Meaning

By bkocher2013

This post first appeared in the Health Affairs Blog.

On February 24, the Centers for Medicare and Medicaid Services (CMS) and the Office of the National Coordinator for Health Information Technology (ONC) issued the proposed “stage 2” rules for the meaningful use of electronic health records.   Stage 2 unequivocally lays out three bold requirements that are sure to be transformative to the United States healthcare system over time.  First, it standardizes data formats to dramatically simplify how information is both captured and shared across disparate IT systems.  This will make healthcare IT systems truly interoperable, one outcome of which will be greatly expanding patients’ abilities to choose where to receive care.

Second, it is emphatic that patients be able to access and easily download their healthcare records and images for their own use.  This will spawn an industry utilizing this “big data” to provide solutions to patients and providers that help manage care, shop for care, and even invent new models of care delivery. Third, it expands the scope of tracked quality metrics to include specialists and to reflect outcomes as well as care coordination.

Together, these three major requirements will drive the birth of new payment models and incentive structures leading to improved productivity and outcomes.  As a result, Stage 2 will fuel an ecosystem of companies attacking healthcare inefficiencies in ways that are not yet even imagined.

“Stage 2” commences in 2014 for providers who demonstrated stage 1 meaningful use in 2011.  For all others, it begins in year four of meaningfully using an electronic health record.  Providers who meaningfully use electronic health records will receive $44,000 in Medicare incentives over five years.  For those who do not meaningfully use electronic health records, penalties begin in 2015 and grow to as much as a 5 percent reduction in Medicare reimbursement over five years.

To qualify as a stage 2 “meaningful user” of electronic health records, providers need to comply with, and track, 20 functional metrics and 12 clinical quality measures. (See Exhibit 1 below.)  The functional metrics are very similar to stage 1 albeit with higher performance thresholds in many cases.   In keeping with the theme of stage 2, CMS emphasizes data sharing, patient engagement, and decision support in order to improve clinical quality measures.

Exhibit 1: Stage 2 functional metrics and clinical quality measures

Rationalizing quality metrics. The clinical quality measures represent a major advance, aligning quality scorecards across HHS’ programs.  A major burden for providers to date has been the checkerboard of clinical quality measures that HHS programs (PQRI, ACO, NCQA-PCMH, CHIPRA) ask providers to report.  Stage 2 aligns all of these programs to satisfy the clinical quality measure reporting requirements. Hopefully, private payors will also adopt these measures for their quality programs since they will already be incorporated into certified electronic health records.  If this occurs, it would radically reduce the complexity and burden of quality reporting for providers, and thereby increase clinicians’ focus on improvement.  Furthermore, by expanding the pool of potential metrics, specialists will be able to select logical sets of measures to track and report with required decision support.

Liberating data. While the mechanics are important, what makes stage 2 transformational is that big data sources and uses – likely across payors — will inevitably emerge.  This will be a byproduct of the requirement that all data be captured using the same standards, the expansion of quality measures to cover a majority of healthcare spending, and the ability for patients to download data.  The proposed rule requires providers to have at least 10 percent of their patients “view, download, or transmit to the third party their health information.”  In a short time, an enormous trove of data will flow outside the electronic health records of physician offices.

Big data offers great potential.  In other sectors, mining large data sets has led to breakthroughs in productivity, consumer experience, and cost structure.  It is also needed to make approaches such as IBM’s Watson supercomputer practical for healthcare, as the quality of machine learning results depends substantially on the amount of data available.  It will not be long until patient level information is combined with large existing data sets like those being liberated by the Health Data Initiative.  These combinations will generate far more accurate predictive modeling, personalization of care, assessment of quality and value for many more conditions, and help providers better manage population health and risk-based reimbursement approaches.

Big data also enables payment reform.  Stage 2 solves the circular problem of needing big data to support non-fee-for-service payment models, and needing new payment models to stimulate the production of the data.  Stage 2 finally removes the barrier of lack of access to data through the ubiquitous reporting of so many quality measures and downloaded patient records.  The transparency facilitated by far deeper, richer, and more timely and specific information will enable payors, consumers, and employers to pay differently for care.

At a minimum and initially, huge variations in prices will be arbitraged, reaping savings for patients and payors.  Over the longer term, it is likely that many variations of episode-based payments will emerge that are tied to specific improvement in outcomes; geographic markets will expand for patients shopping for care, thereby increasing competition; and new lower-cost delivery models will emerge for lower acuity and chronic conditions.

The need for privacy and security. The big concern is rightfully privacy and security.  Nothing would stifle progress faster than misuse of data.  While the meaningful use program has specific security requirements for providers, it will be important for existing privacy rules and security requirements to be rigorously enforced.   The virtuous cycle of innovation enabled by data liberation depends on trust by patients and providers.  Patients will need confidence that the value they get from contributing their health data to datasets outweighs the risks.  Providers will need similar confidence to encourage their patients to access their data and use the tools that will emerge.  One hopes that a market quickly develops that has a high bar for privacy and security, engages patients in their health and healthcare, fairly represents provider performance, supports shared decision-making, and helps providers achieve clinical goals that increasingly are linked to reimbursement.

While the public comment period will inevitably surface many skirmishes over details, the final rule should establish a path towards 1) new reimbursement models that reward outcomes and coordination; 2) massively more data to support patients and providers; and 3) a far more dynamic marketplace.  Providers will be well served to view stage 2 not as a requirement to better use their electronic health records, but as a foreshadowing of how to compete and thrive in a future that is coming sooner than most anticipate or are prepared for: a future where a provider’s ability to deliver reliable outcomes, economic value, and exceptional patient experiences will soon be transparent to peers, competitors, payors, and, most of all, patients.

Comments: 8   |   Add to Facebook!   |   Tweet it!   |   Digg it!

Klout Dominance: Why We Believe

We are big believers in Klout! I was lucky enough to participate in the Series B financing. Since that time, through the hard work of Joe Fernandez and team, Klout has experienced explosive growth. With that backdrop, I am thrilled to announce Venrock’s participation in the Series C Financing. 

Klout measures consumer influence across social media. As social platforms continue to grow, it becomes increasingly important to have a standard system for identifying and measuring influence. Klout is this global standard.

The Social Media category continues to fragment with new platforms showing explosive growth. These platforms are quickly becoming real media channels with scale.  As with any media channel, businesses need to understand the nature of the channel, the mix and makeup of the audience, who matters in that audience, and how to reach that audience at scale.  In a broad sense, I like to think about Klout as the Nielsen of social media. Klout enables advertisers to determine where and whom to target, and helps gauge the efficacy of advertising. Any consumer-facing company that uses a CRM product will want Klout to enhance its customer outreach. Any application can use Klout to better understand its consumers by using influence scores and categories.

Klout uses the data it collects across different social media sites to identify influencers and segment them according to influence category.  Externally, consumers have a single Klout score that measures their general influence online, but behind the scenes these users are segmented according to an incredible array of categories.  Klout currently analyzes a variety of sites, including Facebook, Twitter, LinkedIn, FourSquare, YouTube, Instagram, Tumblr, Blogger, Last.fm, Google+ and Flickr, with many more on the way. Advertisers and businesses can access influence data via an API to run targeted campaigns with consumers in different categories of interest. 

The imprimatur Klout has achieved with brands and agencies is remarkable. The company has achieved a high level of recognition and has emerged as the standard for influence. As the web is rebuilt around people rather than pages, Klout has become the next critical layer of the analytics and measurement stack.


Comments: 8   |   Add to Facebook!   |   Tweet it!   |   Digg it!

“Great” is Tough to Pick out of the “Good” Crowd

By Bryan Roberts

(A version of this post also appeared at AllThingsD.)

The oldest adage in start-ups, for entrepreneurs and VCs alike, is “the key to success is the quality of the people.”  Markets and innovative approaches are important, but my experience supports this notion unequivocally. I have had the good fortune to be involved from an early stage with several billion dollar companies, and most found success after a material pivot from their original approach – Athenahealth, Ironwood Pharmaceuticals and Sirna Therapeutics to name a few.  “I invest in people” is the start-up ecosystem’s version of motherhood and apple pie, but how do you identify “Great” prospectively?

Whether explicitly or not, everyone has their own answer to this question, and based on the success rates, those answers by and large stink. I don’t have a Magic 8 Ball on the topic, but two things make this the issue I wrestle with most: (1) the often-unpredicted success or failure of “nobodies” or “sure things” respectively, and (2) the outsized rewards for locating great, juxtaposed with the probability of abject failure when settling for good. The A+ entrepreneurs with whom I have partnered have come in unusual packages – simply put, there has been no central casting: a biology post-doc who thought about opening a microbrewery B&B; a large animal veterinarian who went to business school in his late 30s; an ex-EMT who was also nephew to the President, among others.  The best VCs seem to show the same diversity of background.

I now focus on these attributes:

  1. Great talent finds a way to win… and is relentlessly driven to do so with a real sense of urgency.  They follow through and complete the task – starting is easy, finishing takes real will.  It is not that they think out of the box; there simply is no box.  They view ambiguity as opportunity, not risk. When things get uncertain is when they really perk up and start to pay attention, because that is when real change is possible.  Most of all, they exceed expectations. They bend the space-time continuum in some fashion and their accomplishments are extraordinary.
  2. Experience is overrated. By and large, the world is changed by the young and the hungry. Experience can be enabling or constraining, but it is not even close to the silver bullet many believe it to be.  If you are seeking a VP marketing or head of sales at a 100+ person company, absolutely look at a resume.  But to find someone with the passion and uniqueness to actually create an early stage venture, you have to spend the time: watch them and see what they do, talk to them and see what they think, ask around and see how respected they are.
  3. Balance exploring/driving with learning/listening. Great people have a very clear grasp of their vision, while understanding that the world has a lot to teach them. They are humble students of the game, but very confident in their abilities, and never “do what they are told.” They don’t avoid conflict and will always bet on themselves rather than shy away from risk.  They ask questions and argue on facts, balancing their gut with innumerable data streams to get to what they believe is the right answer.
  4. Great people are magnetic. They are not only smart and driven, they attract resources when all the data suggests they should not – whether capital, people or partners – and thereby become larger than just their singular efforts.

While potentially controversial today, I have come to believe that great entrepreneurs and great VCs are two sides of the same coin.  Both embody these characteristics.  They are maniacally focused on changing the way we live with innovations others thought were not possible. They are passionate about building a great company and put the company before themselves.  No great VC takes solace in having a portfolio when an individual company struggles – like entrepreneurs, this is deeply personal and about so much more than just money.  Their roles are complementary, like looking down opposite ends of a telescope, but those different perspectives on a problem can be extraordinarily synergistic.  Great future entrepreneurs can look like great young VCs, and vice versa – three of my recent investments are stellar companies started by these “crossover” folks.

All venture firms are simultaneously never, and always, looking for team additions.  I believe this is a direct result of how elusive it is to identify those who will be not only smart, passionate, personable and high integrity, but also successful in this ever-changing, ambiguous entrepreneurial world where what worked last time is no recipe for future wins – and more likely charts a path to mediocrity.   In fact, my own difficulties in finding conviction around potential team additions for our firm are what spurred putting these thoughts on paper.

 

Comments   |   Add to Facebook!   |   Tweet it!   |   Digg it!

More thoughts on market sizing

Update: This is simply a market sizing exercise for people building a business in a market that doesn’t exist. It does not reflect my actual thoughts on value. If you notice, I rewound to 2009 and only explored one business model for clarity.

The New Market

Yesterday we talked about the established market but the market sizing exercise starts to get really interesting when you think about markets that don’t exist.  Let’s take a company like Twitter circa 2009, when there was still a lot of ambiguity around what the size of their market opportunity looked like (some might say this ambiguity still exists today).  The executive team probably had a vague sense that there was going to be some kind of advertising supported model to the business and I’m sure their investor decks contained the requisite ad-supported slide: “$300B in advertising spending in the US and only $25B of it is online!” 

Like most consumer internet companies, the key market sizing question for Twitter is very simple: what is their annual revenue per user at scale? 

Let’s start with a very simple model. Let’s suppose that Twitter will be purely ad supported. The basic market size equation that we’re going to start from is as follows:

Twitter Market Size = (Users) * (number of ads/user) * ($CPM of ads)

We can begin by decomposing the number of users. What does a typical Twitter user look like?  A simple assumption to make is that over the next 3-5 years, the typical Twitter user will be somewhere between 15-34, with a lower diffusion rate in the 35-49 year old category and a very limited diffusion rate above 50.  The Zynga case might make us question some of those assumptions around people over 50 but let’s play it safe.  To begin, let’s start with the following numbers, based on US Census Data from 2000:

  • 15 – 34: (79MM people) * (100% possible diffusion) = 79MM users
  • 35 – 49: (65MM people) * (50% possible diffusion) = 32.5MM users
  • 50+: (76MM people) * (10% possible diffusion) = 7.6MM users
  • Total Potential Twitter Users = 119MM users

There are several factors that will influence this number, including what percentage of people have internet access, socio-economic factors, and general appetite for digital media.  Clearly, this number can be refined. 

From here, we need to think about the number of ads each user will see.  This is particularly tricky with Twitter given that a lot of users are on 3rd party clients, where it may be difficult to track ad views and CTRs and where Twitter may not even be able to serve ads.  This is a whole other discussion but for purposes of this analysis, let’s assume that everyone goes directly to Twitter.com.  This is where the wheels start to fall off the proverbial VC short bus. Twitter has no idea what the ad units will look like, what is the ideal number of ads served, how much time a user will spend on Twitter.com, or whether ads will work at all.  Oh well...  The analysis must go on.  Let’s assume that our thesis is that Twitter.com will be primarily a source of news distribution.  During the week, most users check a news website once in the morning and once in the afternoon, for an average of two visits per day.  On the weekend, let’s assume that an average user won’t check Twitter at all since they’ve got a lot more time on their hands to read magazines, browse their favorite sites, and won’t need the quick-news-fix that Twitter provides.  Given the short format of Twitter, serving 1 ad per visit is not unreasonable.  Putting these assumptions together, let’s look at how many ads an average user will see each month:

Number of Ads Per User = (1 ad per visit) * (2 visits per weekday) * (20 weekdays per month) = 40 ads per month.

As a sanity check, this feels a bit low. Just browse the web for 20 minutes and count how many banner ads you see.  We’ll want to revise this number upwards later on.

Lastly, what is the average $CPM that Twitter will be able to charge?  At scale, let’s assume that Twitter directly sells out 75% of their inventory at a $5 CPM and uses 3rd party ad networks to fill the remainder at a $1 CPM after revenue-share to Twitter.  These are reasonable numbers based on average CPM rates across all categories for banner ads, but there is a huge open question of whether Twitter ads will behave like banner ads in terms of branding value, CTRs, and other metrics.  The ad effectiveness profile could be wildly different, in which case our $5/$1 assumptions could be materially off. 

Now, let’s put all of this together to see what the potential ad-supported annual market size is for Twitter in the US:

Twitter Market Size = (119MM Users) * (40 ads/month * 12 months) * ($5 CPM * 75% of ads) / 1,000 + (119MM Users) * (40 ads/month * 12 months) * ($1 CPM * 25% of ads) / 1,000 = $228,480,000 in revenue per year.  (The division by 1,000 is there because CPM is priced per thousand impressions.)
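For anyone who wants to check the arithmetic, here is the same back-of-the-envelope calculation written out as runnable code. Every input is one of the assumptions above, not an actual Twitter figure.

    # Back-of-the-envelope Twitter ad market size from the assumptions in this post.
    users = 119_000_000                          # total potential US users
    ads_per_user_per_year = 1 * 2 * 20 * 12      # 1 ad/visit * 2 visits/day * 20 weekdays/month * 12 months = 480

    direct_share, direct_cpm = 0.75, 5.00        # 75% of inventory sold directly at a $5 CPM
    network_share, network_cpm = 0.25, 1.00      # remainder filled by ad networks at a $1 CPM net to Twitter

    impressions = users * ads_per_user_per_year
    revenue = impressions * (direct_share * direct_cpm + network_share * network_cpm) / 1_000  # CPM = per 1,000 impressions
    print(f"${revenue:,.0f} per year")           # -> $228,480,000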

As a sanity check, is it reasonable to expect Twitter to capture $228MM of advertising revenue, given that online advertising revenue in the US will hit $50B or so in the next 3-5 years?  Probably. As I noted, there are a number of areas where the analysis can be refined and it is likely that our core thesis of Twitter as a distribution medium of news is too limited.  Beyond that, we haven’t explored a variety of other business models that Twitter could pursue, including subscription, commerce generation, data sales, and so forth.  This is where the really interesting discussion points begin. 

Hope this was helpful!

MC

 


Comments: 8   |   Add to Facebook!   |   Tweet it!   |   Digg it!

Some thoughts on market sizing

Over the last few weeks I’ve had some meetings where the topic of market size could have been a bit more rigorously addressed. It’s a hard issue to tackle – particularly when you’re creating a new market – but the topic is critical in every pitch.  There are some occasions where the market size is fairly straightforward.  For example, I’ve looked at a few female-focused online fashion companies recently and while I know this is a huge market, it’s still helpful to dive into the issue of sizing for a couple of reasons:

1.     Most companies are going after a slice of the market.  The fashion market for 15-44 year old females with household income of $40,000-$80,000 is quite different from the market for all female fashion.  This is an obvious point, but I’d say that about a third of the pitches I see contain market size estimates that include sectors outside the focus of the business.

2.     The market sizing discussion is incredibly helpful in getting to know how you think. Most of the time the intro pitch is the first meaningful interaction between entrepreneur and VC.  I like to think of the market sizing discussion as almost an intellectual discourse between professor (entrepreneur) and student (VC).

With that in mind, I’d like to share a couple different ways that I like to think about market sizing in the consumer internet space.  There are a variety of other ways to think about sizing and a lot has been written on the topic. I’d encourage everyone to spend some time on Google and read up on other opinions.

Established Market, New Product 

Continuing on with the example above, let’s say I’m starting a women’s fashion company that aims to sell scarves online.  Those of you that know me are probably chuckling right now since I’ve been wearing the same 3 scarves for the last few years now and am thoroughly unqualified to run such a business. 

Let’s take a first cut here.

Market Size = (number of females in the US in the target market) * (average number of scarves purchased by females) * (average price point)

The types of scarves that I’m selling will appeal to 15-44 year old females with a household income (HHI) between $40,000-$80,000/year.  The US census data groups people into segments of under 15, 15-24, 25-34, and 35-44.  The data tells me there are 8.7MM females in this category. Not a bad start.  Unfortunately, my instincts tell me that scarf consumption varies dramatically by geography. I’m going to make a simplifying assumption and segment consumers into two groups: California People (which also includes people from Arizona, New Mexico, Texas, and other states where scarf consumption is de minimis) and Everyone Else.  Note that I’m a New Yorker, though I must admit that some of my favorite people are from California! After looking through a map of the US and segmenting different states into warm and cold climates, I’ve decided that 20% of the US population are California People and 80% are Everyone Else. 

Next, I need to determine how many scarves are purchased by the average 15-44 year old female in both of these segments. To do this, I got on the phone and called up 20 friends in each of these groups.  Those of you that did not major in History like I did will likely groan that this is not a statistically valid sample.  I agree.  It’s a start. If you want a statistically valid sample, there are a number of online survey companies that can get you this data fairly cheaply.  My very un-rigorous survey reveals that the California People buy 0.3 scarves/year and Everyone Else buys 1 scarf/year.  Moreover, I got the sense that Everyone Else takes the quality and fabric of their scarves seriously and probably pays more on average per scarf than the California People.  More on this later.

Now we’re on to the final stretch.  What is the average price point of a scarf?  In my informal survey above, I asked my friends for their average price point but most of them didn’t remember and those that did seemed to give me inflated numbers (so snobby!).  To get insights into this question, I went on Amazon, navigated to the women’s clothing section and searched for the keyword “scarf.” Here is the breakdown:

  • Under $25: 3,758 (50%)
  • $25-$50: 1,728 (23%)
  • $50-$100: 1,348 (18%)
  • $100-$200: 524 (7%)
  • $200+: 107 (1%)
  • Total: 7,465

Using this informal technique, let’s assume that the average price point is $25 for Everyone Else and a slightly lower $20 for the California People.  These are decent ballpark approximations but can obviously be refined further.

Taking the data we’ve gathered, our first cut at a market size for my new company is:

(8.7MM females * 80% Everyone Else) * (1 scarf/year) * ($25 average price point) + (8.7MM females * 20% California People) * (0.3 scarves/year) * ($20 average price point) = $184,440,000
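Here is the same arithmetic as runnable code, using only the assumptions above (8.7MM target females, the 80/20 split, 1 versus 0.3 scarves per year, and the $25 versus $20 price points):

    # Scarf market size from the assumptions in this post.
    target_females = 8_700_000

    everyone_else = target_females * 0.80 * 1.0 * 25    # 80% of segment, 1 scarf/year, $25 average price
    california    = target_females * 0.20 * 0.3 * 20    # 20% of segment, 0.3 scarves/year, $20 average price

    market = everyone_else + california
    print(f"Total US scarf market: ${market:,.0f}")     # -> $184,440,000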

Does this number seem reasonable or is it out of line with reality? As a quick sanity check, the next thing I did was go to the Consumer Expenditure Survey maintained by the Bureau of Labor Statistics, look at the total amount spent on clothing by US households, and then look at what percentage of the total clothing market the $184,440,000 represents. 

Lastly, I asked myself, what is a reasonable amount of the market that I could capture? Fashion is a particularly fragmented market and even if I become the category killer in scarves, it’s unlikely I’d get more than a few percent of the overall market.  Let’s say I can capture 5% of the overall scarf market – an extraordinary number in any fashion related category – then I’d be making around $9.2MM/year given the numbers above. 

Note that this is a quick-and-dirty analysis. An actual analysis should be significantly more rigorous in terms of data quality and layer in more refined assumptions.  For example, my segmentation of California People and Everyone Else, while entertaining, is too simplistic to withstand real scrutiny.  The same goes for my methods of data collection. Tomorrow I’ll post some thoughts on how to approach a new market.

 

 

 


Comments: 8   |   Add to Facebook!   |   Tweet it!   |   Digg it!

Venrock’s Campise Doesn’t See ‘Bubble’ in Technology Yet

I was in San Francisco last week and had fun meeting with lots of companies. I also did a short interview with Bloomberg. Here it is.


Comments: 8   |   Add to Facebook!   |   Tweet it!   |   Digg it!

Joining Venrock

It’s a very natural thing to do on your first day of work: fire up your computer, check a bit of email, and read up on the upcoming events of the week.  On this particular day, reading up on the events of the week meant looking through the materials for our upcoming Limited Partnership meeting.  Contrary to popular belief, the “LP meeting” is not one of those shrouded-in-mystery type events; it’s fairly straightforward.  The team presents to the investors on the state of the portfolio, fields questions, and gets to hear from a few select speakers. 

It struck me that the job description of the keynote speaker at the Venrock LP meeting was unusually sizable: “Aneesh Chopra’s job will be to promote technological innovation to help the country meet its goals such as job creation, reducing health care costs, and protecting the homeland.”  Wow.  Kind of puts things in perspective.  Aneesh is the Chief Technology Officer of the United States and was appointed by President Obama in 2009.  Even more importantly, he is the first person to hold this title (and what a great title it is).  Just thinking about this…the ENIAC, which is widely regarded as the first computer ever invented, was built in 1946.  Fast forward 63 years and we now have our first CTO.  There’s a definite thoughtfulness in the selection approach for this role. 

Aneesh covered a wide range of topics in his discussion, but underscoring all the themes that he touched on was the issue of data analysis.  The US government generates unbelievable amounts of data across every category you can imagine: packaged food composition, mining production, reservoir water levels, medical facility ratings, and my personal favorite, the American Time Use Survey. All of this data has been coming online over the past few years and there is still a tremendous amount of data that is not yet accessible.  A number of interesting companies have already begun to use these datasets to gain a powerful advantage.  As @jonathanmendez pointed out, one great example of this is Urban Mapping. I am confident that many more will emerge.

The conversation with Aneesh was inspiring because it brings into focus the reason that I got into venture capital in the first place: to find and invest in great entrepreneurs that are tackling problems of vast importance.  Venrock has a long and rich history of executing on this goal and I’m proud to be working with such an accomplished group of investors.  With such an exciting entrepreneurial community bubbling here in New York ($2.2B of venture money in NY in 2011!), I’m enthusiastic about what this next year will bring.


Comments: 8   |   Add to Facebook!   |   Tweet it!   |   Digg it!

How to Moderate a Panel That Doesn’t Suck

By brianascher

On April 14th I am moderating a panel at the Digital Healthcare Innovation Summit in New York City titled “The Hospital as Production Center:  Holy Grail or Impossible Dream?” [For anyone who wants a discounted registration rate, see the end of this post.]  In an effort not to suck, I’ve put some thought into what makes a great panel.  Like many conference junkies in the tech and finance worlds, I’ve sat through hundreds of panels, been on a bunch, and moderated a few handfuls over the years.  Here’s a list of a dozen suggestions that I plan to implement:

  1. Have at least one colorful character on the panel. Conferences can be a grind, and lots of people find the most value is in the lobby, meeting people.  For those willing to actually sit through your panel, you want to entertain them as well as inform them if you expect them to pick their heads up from their smartphones and remember anything from the hour or so they give you of their (partial) attention.  Having at least one spicy rebel on the panel who is willing to share provocative views and mix it up with the other panelists is key.
  2. As the moderator, get your panelists on a call ahead of time to brainstorm and interact with each other. This is your opportunity to figure out if you’ve got the right mix of characters and also form a plan for what you’ll cover and set expectations.  Don’t procrastinate on this.
  3. Know your audience. Conference organizers purposely cast a wide net in their marketing and promotional materials so they can get the best turnout.  Find out for sure who the bulk of the audience is really likely to be.
  4. Send questions ahead of time. Your goal as panel moderator is to make your panelists look brilliant, not to try and stump them so you look like the smartest person on stage.  Give them the questions ahead of time and know who is likely to have the best answers for each of them.
  5. Keep intros brief.  Maybe not at all. Most intros take too long and are pretty boring.  If the conference materials have speaker bios, I personally don’t think there is any need to go into detailed introductions other than to identify who is who.
  6. Know the context of the rest of the conference.  Pay attention and make reference. Planning your topics and questions ahead of time is great, but you want to keep in mind the context of the rest of the conference so there is minimal duplication but appropriate linkages to other topics and speakers, etc.  If prior speakers or panels have covered topics relevant to your panel, make reference to them.  It shows the audience you were not sleeping through the earlier sessions, so maybe they won’t sleep through yours. :)
  7. Use social media to promote, distribute, and even moderate in real time. Twitter, LinkedIn, Facebook, your blog, are all great ways to promote your panel ahead of time.  SlideShare is a great way to distribute PowerPoint or materials afterwards.  Set up a Twitter hashtag to solicit questions ahead of time and from the audience during the event.  I’ll be using #hosprod as the Twitter stream for my panel.  Feel free to send me questions ahead of time, and check for comments during the panel.
  8. Hit the hard deck, dig for details and examples. Give the audience reasons to take notes by getting granular.  Force the panelists to get specific and give real information.
  9. Stir the pot.  Incite a riot. A panel where everyone agrees with every point is boring.  Elicit differing viewpoints and force the discussion to explore the conflicting opinions.  This will likely be the most useful content, as well as the most entertaining.  Avoid chair throwing.
  10. No crop dusting. It can be very monotonous when the moderator goes up and down the row asking each panelist each question.  Pick your respondents strategically and use them for different purposes.  Move on to the next question as soon as the topic has been sufficiently covered, regardless of whether everyone answered.
  11. Engage the audience, but moderate ruthlessly. Audience Q&A can be very useful and fun, but can also attract rambling questions, people shamelessly plugging their own company/viewpoint, or all manner of unexpected divots.  It’s your job to be respectful but firm in keeping the Q&A on track out of respect for the rest of the audience.
  12. Watch the clock. The ultimate respect for your audience is to finish on time.  Even if your panel is rockin’ and everyone is having a great time, you should finish within the allotted timeframe.  If they still want more, they can follow-up with you and the panelists afterwards.

If you are interested in attending, visit www.digitalhealthcaresummit.com and enter the special key code VNRPR to receive the discounted rate of $695.00.  You can also contact Cathy Fenn of IBF at (516) 765-9005 x 210 to enroll.

Comments: 8   |   Add to Facebook!   |   Tweet it!   |   Digg it!

Forget Super Bowl Commercials…these web companies know how to create awesome marketing videos.

By brianascher

By the time you read this post, Super Bowl XLV will be over and everyone will be talking about the … commercials.  Why?  Because most of them are entertaining, some are memorable, and the $2.5 million price tags (for air time alone) pique our curiosity.  Why are brands willing to pay so much?  Because it is one of the only ways to reach 100 million consumers simultaneously, and because a great 30 second video ad packs an emotional payload in support of your brand unlike virtually any other form of advertising.

Over the past few years I’ve noticed more and more web companies producing great videos to market their companies, often presenting them front and center on their homepage as the introduction to their company.  A great video overview can really help explain what you do for customers, how you do it, and present your brand in a flattering light.  The best videos go viral and bring you exponential attention and new visitors.   And web videos have never been cheaper to produce (at 1/2000th the cost of a Super Bowl commercial, even a start-up can afford them).  So, here are five thoughts on what makes a great marketing video for web companies, and a bunch of examples:

Answer WIIFM: A great marketing video should clearly and convincingly articulate a few simple benefits that customers care about.  Mint.com does a terrific job of this, as does Dropbox, both front and center on their homepage.  The Dropbox video is particularly noteworthy because it takes an esoteric concept and uses analogy to demonstrate user benefits everyone can relate to.

Show how it works: A great overview video shows just enough of the product and how it works to lend credibility to the benefit statement.  Word Lens does a terrific job of this for a product that truly needs to be seen to be believed.  A full blown demo would have been less effective than just these short glimpses of the product in action.

Be yourself: Video is such a rich and engaging medium it is perfect for showing the personality of your brand.  It is a great way to set tone and speak to your customers and prospects in an authentic voice.  Flavors.me does a terrific job of this through music and images alone, letting actions speak louder than words in convincing you that they can make your personal homepage look amazing because they do such a killer job of presenting themselves through this video.  Style personified.

Be fun, get remembered: Great marketing videos are fun to watch and somewhat memorable.  You don’t have to be knee slappin’ funny or so hip it hurts; just smile-inside funny will go a long way.  SalesCrunch and SolveMedia both take pretty dry categories (CRM SaaS and AdTech respectively) and rivet their viewers through entertaining use of cartoons and wit.

Be Brief: Even a great marketing video starts to feel long after two minutes.  Shoot for less.  This video from Smartling gets the job done in 38 seconds.  [Disclosure: Smartling is a Venrock investment.]

These are the five characteristics which I think make for a great marketing video for your web company.  If you think there are points I missed, or have other great examples, please comment and add to the list.  If you are the production agency responsible for making any of these videos please take a bow by claiming your work.  I’m sure others will want to contact you.  If you are looking for more of a live action marketing video, SmartShoot and other online videographer marketplaces can help produce custom video for ridiculously low rates.

Thank you to Ward Supplee, David Pakman, Dev Khare, Dan Greenberg, and Arad Rostampour for sharing some ideas for this post.

Comments: 8   |   Add to Facebook!   |   Tweet it!   |   Digg it!

The single best financial reporting tool ever

By brianascher

Today I faced a choice.  Should I go out and enjoy the beautiful weather and waves and go for a surf or should I blog about my favorite financial reporting tool?  Seems like a pathetic question for a surfer to ask, or maybe this financial reporting tool is really that great.  I’ll settle for an answer of “both”.

The tool in question is the Waterfall Chart.  It’s a way to compare actual results across time periods (months or quarters usually) against your original Plan of Record, as well as forecasts you made along the way as more information became available.  It packs a ton of information into a concise format, and provides management and Board members quick answers to the following important questions:

1.      How are we doing against plan?  Against what we thought last time we reforecast?

2.      Where are we most likely to end up at the end of the fiscal year?

3.      Are we getting better at predicting our business?

The tool works like this:

Across the top row is your original Plan of Record.  This could be for a financial goal like Revenue or Cash, or an operating goal like headcount or units sold.  Each column represents a time period.  I like monthly for most metrics, with sub-totals for quarters and the full fiscal year.  Each row below the plan of record is a reforecast to provide a current working view of where management thinks they will wind up based on all the information available at that time.  Click the example below, which is as of August 15, 2010, to see a sample, or click the link below to download the Excel spreadsheet.

Waterfall Report spreadsheet

Periodic reforecasting does not mean changes to the official Plan of Record against which management measures itself.  Reforecasts should not require days of offsite meetings to reach agreement; they should be something the CEO, CFO, and functional leaders like the VP Sales or Head of Operations can hammer out in a few hours.  Usually these reforecasts are made monthly, about the time the actual results for the prior month are finalized.

When you have an actual result, say $2,111 for the month of August in the example above, it goes where the August column and August row intersect.  On that same row, to the right of the August actual, you put the new forecasts you are making for the rest of the year (September through December).  In this fashion, the bottom cells form a downward stair-step shape (a shallow waterfall, perhaps?) with the actual results cascading from upper left to lower right.  You can get fancy and put the actuals that beat plan in green, and those that missed in red.  You can also add some columns to the right of your last time period to show cumulative and year-to-date (YTD) totals.  With or without these embellishments, you’ve got some really powerful information in an easy-to-visualize chart.
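If you’d rather script the waterfall than maintain it by hand, here is a minimal sketch in Python of the same layout (every label, month, and dollar figure below is an illustrative placeholder, not a number from the actual spreadsheet): the Plan of Record sits in the top row, each reforecast adds a row whose closed months hold actuals and whose remaining months hold the latest estimate, and the printout shows the stair-step cascading from upper left to lower right.

```python
# Minimal sketch of a waterfall report for a single metric (e.g., monthly revenue).
# All month labels and dollar figures are illustrative placeholders.

MONTHS = ["Jan", "Feb", "Mar", "Apr", "May", "Jun",
          "Jul", "Aug", "Sep", "Oct", "Nov", "Dec"]

# Top row: the official Plan of Record, set once and never revised.
plan_of_record = {m: 2000 for m in MONTHS}

# Each row is (label, {month: value}); later rows are reforecasts.
rows = [("Plan of Record", plan_of_record)]

def add_reforecast(label, actuals, forecast):
    """Append a waterfall row: closed months hold actuals, open months hold the new forecast."""
    row = dict(actuals)
    row.update(forecast)
    rows.append((label, row))

# Example: after August closes at $2,111, reforecast September through December.
add_reforecast(
    "Aug reforecast",
    actuals={"Jan": 1900, "Feb": 1950, "Mar": 2050, "Apr": 2000,
             "May": 2100, "Jun": 2080, "Jul": 2090, "Aug": 2111},
    forecast={"Sep": 2150, "Oct": 2200, "Nov": 2250, "Dec": 2300},
)

def print_waterfall():
    """Print one column per month plus a full-year total for each row."""
    print("{:<16}".format("") + "".join("{:>6}".format(m) for m in MONTHS) + "{:>8}".format("FY"))
    for label, row in rows:
        cells = "".join("{:>6}".format(row.get(m, "")) for m in MONTHS)
        print("{:<16}".format(label) + cells + "{:>8}".format(sum(row.values())))

print_waterfall()
```

In practice most teams keep this in Excel or Google Sheets exactly as described above; the sketch is just meant to make the row-and-column logic explicit.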

Two questions an entrepreneur might ask about this tool:

By repeatedly comparing actuals to plans and reforecasts, won’t my Board beat me up each month if I miss plan or, even worse, miss forecasts I just made? If you are a relatively young company, most Boards (I hope) understand that planning is a best-efforts exercise, not an exact science.  Most Boards will react rationally and cooperatively if you miss your plan, as long as you avoid big surprises.  By giving the Board updated forecasts you decrease the odds of big surprises, because the latest and best information is factored into the equation as the year progresses.  They probably won’t let you stop measuring yourself against the Plan of Record, but at least you’ve warned them as to how results are trending month to month, and course corrections can be made throughout the year.

Won’t this take a lot of time? Hopefully not a ton, but it does take effort.  The effort should be well worth it beyond just making the Board happy, though: as a management team you obviously care about metrics like cash on hand, and you should be constantly recalibrating them anyway.  The waterfall is the perfect tool to organize and share this information.

Most of my companies using this tool track five to ten key metrics this way.  Typical metrics include:

  • Revenue
  • New bookings
  • Cash on hand
  • Operating expenses
  • Net income
  • Headcount
  • Units sold or new customers acquired
  • Some measure of deployed/live customers (if there is a lag between a sale and a live customer)
  • For internet companies, some measure of the “top of the funnel” such as Unique Visitors or Page Views

Whether or not you agree this is the single greatest financial reporting tool ever, I hope you give it a try and find it useful.  Now I’m going surfing….


Why Are VCs So Scared of Hospitals?

By brianascher

There is much conventional wisdom in venture capital.  One such belief is that hospitals are a really horrible market for tech startups to pursue.  Back in 2002, when we invested in Vocera, an innovative communications system for hospitals (think Star Trek), many other firms had looked at the deal and passed.  Although this was the company’s third round of financing, it was still pre-revenue and pre-launch, and this was the first round raised after its strategic shift from a horizontal solution to one vertically focused on hospitals.  Most VCs ran from it.  Following are some of the reasons potential investors gave for hating the hospital market then, most of which persist as concerns, often valid, today:

1.      Hospitals are highly budget constrained

2.      Most hospitals don’t have profit motives and are not subject to the same competitive forces as for-profit businesses

3.      Hospitals are complex political environments, with many forces influencing decision making and purchase behavior in ways that can seem counter to rational business judgment.  Those who decide, those who approve, and those who pay for, use, or benefit from a purchase can all be different people in the organization.

4.      Sales cycles are very long, often measured in years.

5.      Hospitals are technology laggards when it comes to adopting information technology.

6.      Hospitals are dominated by large technology vendors such as GE, Cerner and IBM.

There is some truth to each of these, but here’s the counter argument that led us to make a second investment in the hospital market, namely Awarepoint, an indoor GPS system for tracking people and assets in the hospital.

1.      There are lots of hospitals.  There are over 5,500 in the US alone, and there are little blue signs pointing you to each of them.  Given the annual budgets of your typical hospital, this translates into a very big market.  Vocera now serves over 650 hospitals and more than 450,000 daily users, is still growing very rapidly, and believes it has tapped less than 10% of its core market opportunity.

2.      Hospitals are sticky.  Once your product is adopted, and assuming it works well, they are reluctant to switch you out because solutions get so enmeshed in different processes and systems, and so many employees get used to them.  You can’t screw up, or raise prices dramatically, but you may not have to sing for your supper every time a competitor issues a press release.

3.      Hospitals are willing and able to spend on IT if it is a priority and they see an opportunity for a large return on investment.  This is one of the things helping Awarepoint penetrate the market, and they are not alone.  Companies like Allocade, which creates dynamic patient itineraries to improve throughput, are also having success based on the ROI they can deliver.

4.      Because hospitals are underpenetrated by information systems, there is a lot of low-hanging fruit and many relatively basic problems to be solved.  Electronic Medical Records vendors are having a field day, both because of stimulus incentives and because many hospitals, especially the 72% of all community hospitals with under 200 beds, still don’t have this basic form of digitizing their information.  The trend towards Accountable Care Organizations, and the related financial incentives, will require greater clinical integration of care across health care settings (inpatient, ambulatory), greater financial efficiency, and increased transparency and flow of information about the process, costs, and outcomes of health care, all of which will require better healthcare information technology.

5.      Hospitals are similar to each other and willing to serve as references for each other.  Yes, they do compete in some ways, and each has its unique attributes, but you find a higher degree of collegiality and similarity than in most industries, where competitors hate each other and each may have very different ways of doing its core activities.

There are a few reasons why the hospital market is ripening for startups and the VCs who love them:

1.      Hospitals are feeling financial pressure to run efficiently.  With healthcare reform there will be more patients coming through their doors requiring services, while price caps will get tougher.  And there will be financial penalties for things like readmission rates, which often correlate with operating inefficiently and which technology can help prevent.

2.      With the EMR mandates and installations, the Chief Information Officer is now in an elevated position in the organization and even considered a revenue generator.  Many EMR installation projects are leading to ancillary projects and opportunities to automate and digitize other aspects of hospital operations.

3.      New IT paradigms like cloud-based services, open data initiatives (thank you, Todd Park @ HHS), APIs, and Open Source mean that it is less expensive to build and deliver better products into the hospital.

4.      Wireless technologies, and relatively cheap and robust devices like iPhones and iPads, make it easier to reach caregivers on the go, whether nurses at the bedside or doctors on the golf course.  Companies like AirStrip are getting real-time info to caregivers wherever they are, and caregivers love it.  Also, WiFi and ZigBee in hospitals mean your equipment and monitors, and even staff, can transmit their info from wherever they are without wires or expensive, disruptive installations.

5.      The current generation of doctors is used to technology in their personal lives.  They use email, carry iPhones and BlackBerrys, shop online, etc.  And the residents entering hospitals today are Digital Natives.  There will be an increasing expectation that hospitals adopt the technologies that most other verticals have embraced.

While we fear an unexpected visit to the hospital as much as anyone, Venrock is looking forward to more investments in companies that serve hospitals with compelling HCIT solutions.
