I spent yesterday at the eMerge conference in Miami. While there was a nice list of speakers, I spent most of the day talking to the early stage companies exhibiting and catching up with people in the entrepreneurship community in Miami.
My takeaways from the conference:
About ten of the companies exhibiting are working in the HR (human resources) space with all kinds of software and platform offerings. Not a space that interests me but apparently very popular.
Despite the cluster emerging in early stage medical devices and healthcare in SFL, I saw very few such companies exhibiting.
The portfolio companies of Medina Capital were the best-looking collection of companies, all of them reminding us what "high tech" means. Talking to these folks reminded me of discussions I used to have with people at MIT Media Lab.
Several good-looking companies were bootstrapping, with no desire to raise capital. They were characterized by a step-by-step caution in developing the business that is somewhat refreshing.
Marginal Revolution has an interesting post today, "The Demand for R&D is Increasing". In it they discuss the potential market for cancer drugs in China, which they estimate is 8X larger than the U.S. market. Based on this potential market size, the Chinese government is funding pharmaceutical research for the first time.
The takeaways:
Clayton Christensen was correct when he stated that the market size is determined by the number of unserved customers.
China, India, Russia or any other large country should offer incentives to move pharmaceutical research to their country, assuming development costs would be a fraction of the cost in the U.S. (The post estimates that development cost in China is 10 percent of the cost in the U.S.). Such an approach would likely lower healthcare costs for the government, encourage a pharma industrial cluster in the country and raise research standards in local universities, to name a few benefits.
Pharmaceutical prices in the U.S. are outrageously high given the number of worldwide patients for any given drug at an "affordable" price. With a tiny marginal cost to produce each additional unit and R&D costs spread over large unit volumes, liability costs cannot explain the high prices.
Since 2012 I have posted about 40 articles on design thinking.
Yesterday I met with the founders of Design Thinking Miami, Mariana Rego and Jessica Do. These ladies are evangelizing design thinking through a series of practical, hands-on workshops that they run regularly. Mariana first studied design thinking as an engineering student at the University of Miami.
Mariana is also the Miami coordinator for Launch Code, a program to develop qualified programmers and place them in jobs at local companies. Mariana reports that 70 companies have signed up in Miami to take interns as the first step toward permanent positions.
Yesterday I also visited Entopsis at the Hialeah Technology Center. Entopsis is applying artificial intelligence to medical diagnostics in a very interesting new way. The Hialeah facility is a very nice location if you need a lab or light manufacturing space for an early stage company and the rents are comparatively low. Also, very convenient to the airport.
Innovation has become such a buzzword that I almost do not want to write this post.
Daniel Isenberg at Babson has a quote that is helpful:
"The true enemy of growth for firms is stagnation, and the reality is that most small businesses are stagnant....
One way to combat stagnation is through innovation. What many do not realize is that there are at least three types of innovation, that each type has a specific use and that different levels of management have responsibility for each one.
Grounded Ideas provide the incremental improvements that bring about best practices and excellence in operations. These responsibilities can be delegated to managers.
Blue Sky Ideas bring about revenue growth through tactical and strategic insights. They are a C-level responsibility.
Spaced Out Ideas are insights that create new business concepts by re-framing a problem. They are also a C-level responsibility, but once you have found product/market fit, C-level execs are better served focusing on blue sky ideas. (Christensen's disruptive innovation is in this category.)
For more, see my over 20 posts on innovation and disruptive innovation.
Perhaps because Google won a big contract to provide Office apps to PwC (the large international accounting firm), they have recently launched a PR campaign all over the web for "Google Apps for Work". An example of the PR is this Silicon Alley Insider story, "Google shares its plan to nab 80% of Microsoft's Office business". A sentence in the article caught my attention:
"Google is constantly looking at how people are using Apps and trying to entice them to use it more."
Google is renowned for using data in its product analysis, which perhaps makes the above quote mundane. However, I think it points out the problem with Google's approach to software. Data only describes current usage. If we examine saving a file in Google Docs, Google sees it saved online, and perhaps Google sees it downloaded to the computing device...but Google never sees it then saved in Dropbox. You might ask why someone would prepare a document in Google Docs and save it in Dropbox. There are probably ten good reasons, none of which show up in Google's user data.
Microsoft recently launched a new Outlook app for iOS 8, which I tweeted about.
Nine days later, I think that Outlook is probably the best iPhone mail app ever...and it includes a good calendar app. If the Outlook calendar supported multiple alerts per appointment, I would stop using a separate calendar app. What I particularly like about the app is the seamless integration with Dropbox and Google Drive, both of which I use daily. (Dropbox holds the docs I produce and Google Drive holds third-party documents.) It also seamlessly integrates with Gmail and other mail providers.
I am not a Microsoft fanboy. For many years I competed with them and I intentionally avoided their products. The only nice thing I could say about Microsoft was to my entrepreneurship students--"Microsoft is an excellent example of monopoly, which economists tout as an attractive business model". With the passing of time I have mellowed on Microsoft. (A tweet rather than a full blog post shows that I am still conflicted.) I tried the new iOS Outlook partly because, in 25 years of using email, I had never found an email app on any device that I really liked. However, let's get back to Google.
How was Microsoft able to develop such a great new mail app while Google is still constrained by its data? The answer is quite simple. Most of the design and code for the new Outlook came through an acquisition, and the acquired company was not constrained by the corporate mindset of a Google, a Microsoft or an F500 company.
I imagine that when the Microsoft execs saw the other company's mail app, they just thought OMG and quickly bought the company. I commend Microsoft for admitting that somebody else understood the customer better and did a beautiful design. I wish Google would make some app acquisitions, get a better understanding of the user and perhaps learn more about UI design. BTW I use a lot more apps from Google than from Microsoft, but I hope that Microsoft's new acquisition has some new ideas for other apps. Today I am less hopeful about Google.
Historically, medical diagnosis has been, in simplest terms, largely a combination of observation, imaging and tests of blood and urine. The underlying logic of such an approach is the checklist. If certain boxes are checked you have bronchitis; check one or two more and you have congestive heart failure. This type of approach does not require higher order thinking such as pattern recognition or metaphor, which is why checklists are also used for routine things like pre-flight checks.
This type of approach has not changed in hundreds of years, even as the machines that gather the data get better and better. Little thought has been given to using pattern recognition for medical diagnosis, but with advances in computing and AI, perhaps that is changing. I saw a pattern recognition-based medical diagnostic prototype this morning. The premise is that a particular disease produces a particular color pattern when patient blood is put on a non-chemical proprietary slide. The disease pattern can only be identified across all the color permutations using pattern recognition and AI. The implications are staggering if we just consider the cost savings and the speed of diagnosis. This is perhaps innovation at the scale of the transistor or the computer.
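To make the idea concrete, here is a minimal sketch of the pattern recognition step, in Python with synthetic data. The color-histogram features and the random forest classifier are my own illustrative assumptions, not the prototype's actual method.

```python
# Minimal sketch: classify disease signatures from color features of a
# slide image. All data here is synthetic; the feature layout and the
# classifier choice are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Pretend each slide is summarized as 48 color features: mean R/G/B
# intensity in each of 16 regions of the slide image.
n_samples, n_features = 500, 48
X = rng.random((n_samples, n_features))

# Synthetic labels: 0 = healthy, 1 = disease. A weak color signature is
# planted in three regions so there is a pattern to learn.
signal = X[:, [0, 5, 10]].mean(axis=1)
y = (signal + 0.1 * rng.standard_normal(n_samples) > 0.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```

Nothing about the real slide chemistry is captured here; the point is simply that once a slide is reduced to color features, off-the-shelf pattern recognition can learn the disease signature.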
No one disputes the value of good doctors, and a lot of money is invested each year in medical research and development. Perhaps equally beneficial will be the exploration of new diagnostic methods using AI in medicine.
In the early 1980s I invested in one of the earliest online retailers, Comp-U-Card. Comp-U-Card succeeded and went public by understanding that customers were only willing to buy hard goods, such as autos, TVs and air conditioners, and would not buy fashion items and accessories. In the early 1990s shoppers grew comfortable with buying fashion online. Today we buy everything online, in part thanks to the success of Amazon.
Many have published stories recently about Amazon pulling its diaper product, including Venture Beat. Venture Beat also lists other product or merchandising failures, such as a payment wallet and a phone. Everyone talks about these failures as if Amazon were a retailer. I think that is the wrong way to think about Amazon. Amazon is an exchange operator that facilitates the widest range of suppliers and customers entering into transactions. Amazon's expertise is in the infrastructure to support the exchange, which perhaps explains why they have been so successful providing a non-retail service like cloud computing.
When Amazon starts to think like a retailer and introduces private-label products, such as diapers, they move far away from their expertise in IT infrastructure and fail. This logic would also explain why the wallet and the phone were not successful. Retailers succeed in part by selecting merchandise for their target customers. Exchanges succeed in part because their infrastructure supports the needs of the exchange users. While some expertise is needed at a product level, many exchanges tend not to be discriminating. This is the case with Amazon's strategy to sell everything, from consumer products to boat parts to building supplies. Think of Amazon as an exchange and not a retailer.
There was much discussion this weekend about Jerry Neumann's article, "Heat Death: Venture Capital in the 1980s", which chronicles the evolution of the venture capital industry from its start in the 1960s.
One interesting part of the article was how the industry learned about the difference between market risk and technology risk. Market risk is "will people buy it" and technology risk is "will it work". In the current period, VCs generally take market risk and avoid technology risk.
However, as I thought about this dichotomy, the logic of "do something better" kept coming back to me. This approach minimizes both risks, or at least frequently does, by taking an existing technology and customer base and offering something better. Google and Apple computers would be examples where the state of the art was advanced a bit but there was little breakthrough in terms of technology or customer base. (Amazon might have been an example of market risk and Akamai an example of technology risk when they launched.)
Perhaps one is well served to understand where the business concept is on the continuum of both market and technology risk and to realize that great opportunities may exist with small changes in the market or technology risk factor.
HBS Working Knowledge has an interesting new working paper by Gerald Carlino and William R. Kerr, "Agglomeration and Innovation". The paper discusses the academic literature and the authors' views on the factors that explain the geographic concentration of innovation. Not surprisingly, innovation concentrates in metropolitan areas. Innovation is defined in a classic way as "invention that is commercialized", which in the vernacular would be described as entrepreneurship. Other findings from the article are quoted below:
The benefits from local public subsidies for basic research may not stimulate growth in targeted communities, except for creating a few jobs for scientists and engineers.
They relate co-agglomeration levels [of innovation] to the extent to which industry pairs share goods, workers, and knowledge. They find evidence for all three mechanisms, with knowledge spillovers again the most localized.
For all industries, the localization effects of being near similar businesses decay rapidly with distance within cities—the positive localization effect from being within one mile of another company in one’s own industry is at least ten times greater than the positive effect realized when locating two to five miles away from said company.
While most thoughts of innovation clusters today naturally begin with Silicon Valley, it is important to recall that innovation clusters do move over time.
On the relationship between agglomeration and innovation… What is better established is the development and sharing of specialized business services. This has been especially true with the case of entrepreneurial finance (e.g., angels, VC). Traditional sources of financing, such as bank loans, may be unavailable to innovative start-ups due to their high risk, large financing requirements, and asymmetric information, especially in high-tech industries (Gompers and Lerner, 2001).
The strong concentration of the commercialization of innovation... to the need for specialized business services (e.g., firms specializing in market research and product testing, specialized patent lawyers, and the availability of financing) and similar infrastructure.
Knowledge spillovers are geographically concentrated
There is general empirical evidence that R&D at local universities is important for firms’ innovative activity. Audretsch and Feldman (1996) and Anselin et al. (1997) find localized knowledge spillovers from university R&D to commercial innovation by private firms
Setting aside the academic speak, innovation and entrepreneurship benefit from:
Like-minded people very nearby
Support by research universities, and
Support services nearby, most notably capital.
The article should be required reading for politicians. Most government support for entrepreneurship appears to be misdirected or wasted.
Note: While Miami has made progress in developing its entrepreneurial community, several of the requirements identified above still need further development.
Beta Boston has a nice story, "I’m a physician, and I saw the future of medicine at CES", about a doctor's visit to this year's CES show in Las Vegas. The doctor enthusiastically describes several devices that will improve healthcare. Right after reading the article I received a phone call. Transcript follows.
(Names are changed to protect the ....)
Caller: This is Dr. Bozo's office calling to confirm an appointment tomorrow with Dr. ...
Me: I responded to the text message to confirm the appointment; why are you calling me?
Caller: No one monitors the text messaging system for responses.
Me: Why do you send them out and waste my time?
Caller: The system sends the texts out and people don't all respond, so we call everybody.
The doctors may have great devices to use, but I am not so confident they will use the equipment in the most effective way.
I bought my first cellphone in 1984, the year the "Motorola brick" was released (see photo above). I might still be using it, but there was no roaming in Asia back then. So I used a different, smaller phone in Asia and updated it every time my secretary told me the phone was too old. (She explained very slowly that my image was affected by my phone choice.) Over the years I have found several devices noteworthy:
Palm Pilot
Blackberry with voice
Samsung Note
The first two devices I bought before they were available. I really liked the Note at the CES show in 2011, but the OS and apps were clunky so I did not buy it. I bought an iPhone 4 instead (and retired a version of the Blackberry phone that I had carried since 2002).
In October this year I replaced the aging iPhone 4 with an iPhone 6+. I knew the size would not be a problem because I had spent time with the Note. The phone's best feature is the battery life. I get two days of usage before I need to re-charge. I also like the larger icons on the home page(s), which makes it easier to hit the one you want.
There is one other benefit to the 6+ phone. Since I bought the phone I have not used my iPad once. Not once. Reading on the 6+ is very nice and that was my primary use of the iPad (I don't watch movies on computers.) Using the 6+ means I can carry one less device and charger around. If I could attach a keyboard to my 6+, I would not need a laptop (except maybe when doing spreadsheets).
In summary, the 6+ is a great improvement in the iPhone product line.
I saw someone today who attended one of my workshops a few years ago. At the end of every workshop I show the link to this blog, some information about my consulting practice and a link to a public file. Three years ago I shared a public file from Evernote that includes all my clipped articles on entrepreneurship (789). That would be articles from the over 100 blogs I read every day. The link to the file is https://www.evernote.com/pub/rhhfla/aaclass.
In the last two weeks Evernote released a new feature called "related notes". If you include my shared file in your Evernote, then every time you add a note or search for one of your notes, any of my notes on a similar subject will also appear. That's pretty cool.
"While other models have achieved comparable accuracy rates, they were only designed to work at a single point in time with a single set of nine justices. Our model has proven consistently accurate at predicting six decades of behavior of thirty Justices appointed by thirteen Presidents. It works for the Roberts Court as well as it does for the Rehnquist, Burger, and Warren Courts. It works for Scalia, Thomas, and Alito as well as it does for Douglas, Brennan, and Marshall. Plus, we can predict Harlan, Powell, O’Connor, and Kennedy."
What the model can do is determine with 70% accuracy how former Chief Justice Earl Warren (1953-1969) would have voted on a recent Supreme Court decision such as "AMERICAN BROADCASTING COS., INC., ET AL. v. AEREO, INC., FKA BAMBOOM LABS, INC."
So here is what we could do. No longer would the President nominate and the Senate confirm Justices and Chief Justices of the Supreme Court. Instead we would have an election where citizens pick any of the 30 Justices in Blackman's study for the eight Associate Justice positions, and any former Chief Justice could be picked for Chief Justice. Then for a new Supreme Court case we run the algorithm against the Justices picked by the people. One benefit is quicker decisions by the Supreme Court. Not sure we would get opinions, but at least the decisions would be made. (No further updating of the algorithm.)
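For the technically curious, here is a hedged sketch of how a justice-vote classifier of this general kind might be set up. Everything below is a synthetic stand-in: the feature codings are invented, the labels follow a toy rule, and the real model was trained on decades of actual case records.

```python
# Toy justice-vote classifier. Features and labels are synthetic; only
# the general shape (case features in, predicted vote out) mirrors the
# model discussed above.
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier

rng = np.random.default_rng(1)
n = 2000

# Invented features for each (justice, case) pair.
issue_area = rng.integers(1, 14, n)              # hypothetical coding
lower_court_conservative = rng.integers(0, 2, n) # lower court direction
petitioner_type = rng.integers(1, 10, n)         # hypothetical coding
term_year = rng.uniform(0.0, 1.0, n)             # normalized term year

X = np.column_stack([issue_area, lower_court_conservative,
                     petitioner_type, term_year])

# Synthetic rule: the vote tracks the lower-court direction about 70%
# of the time, echoing the accuracy quoted above.
flip = (rng.random(n) < 0.3).astype(int)
y = lower_court_conservative ^ flip              # 1 = vote to reverse

clf = ExtraTreesClassifier(n_estimators=300, random_state=1)
clf.fit(X[:1500], y[:1500])
print(f"toy held-out accuracy: {clf.score(X[1500:], y[1500:]):.2f}")
```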
The fun would be in the campaigning for mostly dead Justices to serve on the new Supreme Court. I suppose that someone like the ACLU or the Heritage Foundation would propose a slate of extreme liberal or right-wing Justices, and nobody would advocate for a collection of centrist Justices. And therein lies the problem with politics today. Most advocates support positions at the edges of the political spectrum, and nobody represents most of us who are liberal on some issues, conservative on others and "don't care" about a third group of issues.
Vivek Wadhwa's Washington Post op-ed piece, "We’re heading into a jobless future, no matter what the government does", has gotten a lot of pickup on the Internet. Wadhwa is a professor at Stanford with a distinguished academic career. The article discusses the dramatic decline in employment opportunities due to technologies such as AI, automation and robots. I particularly like the joke about the factory of the future:
“The factory of the future will have only two employees, a man and a dog. The man will be there to feed the dog. The dog will be there to keep the man from touching the equipment.” (Carl Bass, CEO of Autodesk)
Wadhwa makes a more serious point when he discusses the role of government in addressing the lack of future jobs. In the industrial age government could manage policy to create enough work for people to provide for their families and maintain self-respect. Wadhwa believes that government can no longer manage policy to create sufficient jobs. The current efficiency of production and the expected increase in productivity will result in government policy being ineffective to create new jobs faster than existing jobs are eliminated.
If Wadhwa is correct, which I think he is, then what is the role of government? If government cannot manage the economy to satisfy individual economic needs, then what role is left for government? This is perhaps the bigger question raised by Wadhwa. Why do we need a government with 2.7 million government employees, excluding the military, if the government cannot satisfy the most fundamental economic well-being of people? Maybe government should be re-thought.
Of course, many, including Hayek and Wadhwa, have said that government cannot manage complex problems. It is doubtful that government will redefine its role, and equally unlikely that it will develop economic policies to counterbalance the natural job loss from new technologies.
I spend quite a bit of time considering the future. This practice started in Indonesia, where I had to consider the future in order to mitigate risk. Since Indonesia I have tried to predict the future in order to better understand technology. This thinking has also led me to take a serious interest in the nature of the customer experience and how it will evolve. All of this thinking about the future hopefully has some positive impact on my teaching of entrepreneurship as well.
Steve Jobs once asked his former boss, Nolan Bushnell, “How in the world do you figure out what the next big thing is?” Bushnell answered:
"You’ve got to figure out how to put yourself into the future and ask what you want your computers to be able to do"
I rarely think about the future of technology this way, but the technique looks eminently reasonable after Bushnell points it out. Excellent advice for an aspiring entrepreneur.
Lately I have been reading Our Mathematical Universe: My Quest for the Ultimate Nature of Reality by Max Tegmark. While the book largely deals with cosmology, the origin and development of the universe, the part I find most fascinating is the scientific method used by the scientists. Given that the universe began around 14 billion years ago, many of the problems are very complex. One technique is the reverse of Bushnell's approach. The scientists know the state of the universe today from observations. Therefore, they assume what had to happen 14 billion years ago, according to quantum mechanics, to explain today's universe. Then they look for empirical evidence to support their assumptions about the start of the universe. A lot of math later, if the empirical observation matches the assumption about the beginning of the universe, the assumption is taken to be correct. If the assumption is proven wrong, it may actually mean that the question asked in making the assumption was wrong. Another example of the importance of asking the right question.
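Here is a toy version of that loop in Python. The physics is deliberately fake (a simple exponential dilution stands in for the expansion of the universe); only the assume-the-past, run-it-forward, compare-to-today logic mirrors the method.

```python
# Toy "assume the past, run it forward, compare to today" loop.
# The model is fake physics; the fitting logic is the point.
import numpy as np

def run_forward(initial_density, rate=0.5, steps=28):
    """Evolve the toy universe forward from an assumed initial state."""
    state = initial_density
    for _ in range(steps):
        state *= np.exp(-rate)  # expansion dilutes density each step
    return state

observed_today = 1.0e-6  # stand-in for a present-day measurement

# Assume many possible initial states, run each forward, and keep the
# one whose present-day prediction best matches the observation.
candidates = np.logspace(-2, 4, 600)
errors = [abs(run_forward(c) - observed_today) for c in candidates]
best = candidates[int(np.argmin(errors))]
print(f"best-fitting initial density: {best:.3g}")
```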
Bushnell says to assume the future you want. The cosmologists assume the past to prove their theories. No guessing about the future or the past. Assume what you need and then develop it.
Courtesy of the fine thinkers at Marginal Revolution, I came across this article--"Machines v. Lawyers". The article documents the decline in the number and earning power of lawyers over the last few years. Applications to law schools have declined 40% since 2004, and small law firms are particularly hard hit.
The article makes two important points:
Increasing computerization and artificial intelligence are replacing legal services previously provided by lawyers (humans)
The cost of legal services may decline because the IT-based products are cheaper than humans
I tell my students interested in law to go into environmental and intellectual property (IP) law. I think the proliferation of open source hardware and software, 3-D printers, the Maker movement, etc. are all going to challenge current IP law to evolve and change.
I do not see litigators being replaced by computers, but the cost of litigation may go down. Regrettably, that might trigger more lawsuits. I think most real estate attorneys are toast (replaceable by automation), except for the few that do large land acquisitions, financings or construction loans. Corporate/transaction attorneys are very likely to decline in numbers. Many very good legal documents for investments and transactions, prepared by first-class attorneys, are available online. Maybe you still need a lawyer, but they can start with these drafts and save countless hours and fees.
I think accelerating the decline in lawyering is a big business opportunity: applying AI, big data and application development to the practice of law to reduce headcount, billable hours and complexity (through standardized forms).
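As a minimal sketch of the standardized-forms piece of that opportunity, consider toy document assembly: a vetted template plus the deal terms produces a first draft instantly. The template text and field names below are entirely hypothetical.

```python
# Toy document assembly from a standardized form. Template text and
# field names are hypothetical, not an actual legal form.
from string import Template

NOTE_TEMPLATE = Template(
    "CONVERTIBLE PROMISSORY NOTE\n"
    "Principal: $$${amount}\n"          # $$ renders a literal $
    "Holder: ${investor}\n"
    "Maturity: ${maturity} months from issuance\n"
    "Discount on conversion: ${discount}%\n"
)

deal_terms = {
    "amount": "250,000",
    "investor": "Example Ventures LLC",  # hypothetical party
    "maturity": "24",
    "discount": "20",
}

# The lawyer starts from this draft instead of a blank page.
print(NOTE_TEMPLATE.substitute(deal_terms))
```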
To all my friends who are corporate/transaction attorneys, develop industry expertise, become expert in specialized transactions, learn to speak more foreign languages and have your children learn to code.
About two years ago I visited a friend in NYC who was part of a team doing a mobile shopping app. He gave me a link to download the beta app to my iPhone. To log in to the app I had to use either my Facebook or Twitter account, both of which I had at the time. I asked him why I had to use either of those accounts to log in, and he told me that the company did not want users who did not have social media accounts. I said I thought that was a bit presumptuous, especially for a startup, but he assured me that the special login requirement would not affect customer adoption of the app. A historical note: the app never caught on and my friend moved to a different startup.
Yesterday I read a story, "Users Are Growling About Apps That Require Facebook", which describes a dog app where you have to log in with Facebook. Given that dogs are not allowed to have Facebook pages and friends (which I personally think is very unfair), the writer points out that requiring a Facebook login for a dog app is illogical. This story made me think of my friend in NYC.
The lesson: if the login does not facilitate ease of use or increase the value of the app to the user, keep it as simple as possible.
This really interesting article on Brain Pickings referenced a quote from NPR:
"NPR recently shared a survey that found 40% of the American public doesn’t believe the world is more than 6,000 years old."
Of course, when was the last time you saw a reference to something older than about 2000 BC, roughly 4,000 years ago? Given the peculiar nature of Americans, we rarely reference anything that pre-dates the founding of the U.S. All of this just points out that we have a prejudice, in the Kahneman sense, with respect to time, or perhaps more precisely, history. The brain conserves energy by considering a very short timeline compared with the actual time that humans have walked the earth.
What this suggests is that we do not sufficiently study and understand the basic nature of humans, which may be constraining our ability to really understand consumer problems and develop new business concepts.
If we go back about 40,000 years, two interesting things happened that enabled early humans to advance beyond their then-current status as hunter-gatherers. First, humans learned to trust beyond their immediate family and tribe; second, technology appeared for the first time. Technology allowed abundance and scarcity, fundamental economic concepts, to appear, which led to sharing (which required trust) and specialization. Specialization led to barter and the emergence of those efficient organizations called "firms", another economic term. However, as specialization and firms succeeded and made lives better, barter broke down and the more efficient money emerged. Congratulations, we have now reached approximately 12,000 BC.
Now we can argue about whether trust preceded sharing or not and whether firms came before specialization, but what is clear here is that the fundamentals of civilization include:
trust
sharing
firms
money
You might ask why this is important. The reason is that every time you change one of these four concepts a huge new market opportunity emerges. A huge new market opportunity emerges! For example, look at this abridged list of changes in money:
gold coins
paper money
bank accounts
credit cards
PayPal
Bitcoin
Every single change created a huge market opportunity and large companies that recognized the opportunity.
If you can insert trust or sharing into a new situation, you spawn a huge market opportunity. eBay succeeded when it solved the issue of buyers trusting sellers; Airbnb allowed us to share our real estate; Amazon redefined the retail "firm".
Not a lot has been written about the techniques to identify the large market opportunities. Change one of the four fundamentals of civilization and you may have a large opportunity.
For a few years now I have been saying that the big market opportunity is in curating data to make it more usable and accessible. This quote from Stephen Wolfram, Founder of Wolfram Alpha, describes the opportunity well.
"One of the objectives is we’re dealing with curating the world. Curating the world involves knowing all the chemicals that exist and things like that. It also involves knowing all the programming languages that exist and being able to interface with those things, and knowing all the connected devices that exist and being able to interface with those kinds of things as well. It’s really using the same meta methodology but applied to all these different areas."