Brad Feld

Tag: security

Use of HTTPS (which stands for HTTP Secure) has grown from 13% of the top one million websites to 19% in the past year. With major media sites such as NYTimes.com joining the movement, now over half of all web requests are served securely to the browser.

Two years after the launch of Let’s Encrypt, this is fantastic progress. In this new era of state-sponsored hacking and fully professionalized cybercrime, it is heartening to see engineers get seriously organized and tackle something on the scale of securing the entire web.

Even a few years ago I would have been skeptical this would be possible. Until very recently, setting up HTTPS meant purchasing and managing certificates and configuring them correctly to work with your web server. This is a non-trivial effort and many people and companies didn’t bother with it. This was especially true with the long tail of websites, but also included many major ones.

The drive to HTTPS the web did not happen by accident. It is akin to an old-fashioned barn raising but on a global scale, organized by engineers with good intentions to protect users, and ensure that the web remains a vibrant and trusted ecosystem into the future.

A few things had to come together for securing (HTTPS’ing) the web to become reality:

  1. The global internet security community had to get serious about this problem. With Google now stiffly penalizing the SEO of non-HTTPS sites, and Chrome and Firefox escalating browser warnings, website owners are rapidly adopting HTTPS.
  2. Certificate management had to become cheap and easy. We have Let’s Encrypt to thank for that.
  3. Website technology providers had to make HTTPS a turnkey experience. This is happening now.

When you bring up Feld Thoughts in your browser, you should see the green “Secure” indicator and https in the address bar.

Pantheon, one of our portfolio companies, hosts my website and made this happen, in zero clicks. With Pantheon, HTTPS just works out of the box and they are now providing HTTPS (powered by Let’s Encrypt) for all 200,000 of their websites, free of charge. Even better, it is powered by their new Global CDN, with over 30 points of presence and the most sophisticated Drupal and WordPress caching technology available on the market.

I am happy with what the Pantheon team has built. They didn’t cut any corners:

  • HTTPS is available for free as a turnkey service for all plan levels
  • Because this feature is deeply integrated into their CDN, you don’t pay a performance penalty for deploying HTTPS
  • Their CDN speeds up page loads by 50% to 300% by caching full-page content (traditionally almost impossible to achieve with dynamic CMSs)

When you load your website, do you see the happy green box of Secure “https”? If so, nice work! If you don’t, do your website visitors a favor – email your website developer and ask them to help you set it up.
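
If you’d rather check from a terminal than squint at the address bar, here is a minimal Python sketch (standard library only; feld.com is just an example hostname) that makes a TLS connection, verifies the certificate chain, and prints who issued the certificate and when it expires:

    import socket
    import ssl
    from datetime import datetime, timezone

    def check_https(hostname: str, port: int = 443) -> None:
        """Connect over TLS, verify the certificate chain, and print the basics."""
        context = ssl.create_default_context()  # verifies chain and hostname
        with socket.create_connection((hostname, port), timeout=10) as sock:
            with context.wrap_socket(sock, server_hostname=hostname) as tls:
                cert = tls.getpeercert()
                issuer = dict(item[0] for item in cert["issuer"])
                expires = datetime.fromtimestamp(
                    ssl.cert_time_to_seconds(cert["notAfter"]), tz=timezone.utc)
                print(f"{hostname}: issued by {issuer.get('organizationName')}, "
                      f"expires {expires:%Y-%m-%d}")

    check_https("feld.com")  # any HTTPS site works here

If the handshake fails or the certificate doesn’t match the hostname, wrap_socket raises an error – the same failure your visitors would see as a browser warning.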

If they tell you it is too much work and/or too expensive, then you should look into changing hosts. Email me if you’d like an intro to the gang at Pantheon.


The CEO of our portfolio company JumpCloud, Raj Bhargava, reached out to me after my rant on digital security earlier this week.

Since JumpCloud plays in the world of digital authentication, they are well versed in security issues and are helping organizations securely connect their users to their IT resources (systems, apps, storage, WiFi, servers, etc.).

He pointed me to a five-step video they put up about how IT organizations can step up their WiFi security – not only against KRACK, but against man-in-the-middle attacks and poor WiFi security hygiene.


We live in a digital world with a false sense of security. While watching Blade Runner 2049 I smiled during a scene near the end where Deckard says to K, “What Have You Done?!?!?” I expect that this false sense of security will still exist in 2049 if humans manage to still be around.

The first big piece of security news this weekend was ‘All wifi networks’ are vulnerable to hacking, security expert discovers. It’s only a “Severe flaw in WPA2 protocol leaves Wi-Fi traffic open to eavesdropping,” but, well, that’s most Wi-Fi networks. If you want the real details, the website Key Reinstallation Attacks: Breaking WPA2 by forcing nonce reuse goes into depth about KRACK Attacks. And yes, KRACK is already up on Wikipedia.

Here’s the summary, which is mildly disconcerting (that’s sarcasm if you missed it …):

The weaknesses are in the Wi-Fi standard itself, and not in individual products or implementations. Therefore, any correct implementation of WPA2 is likely affected. To prevent the attack, users must update affected products as soon as security updates become available. Note that if your device supports Wi-Fi, it is most likely affected. During our initial research, we discovered ourselves that Android, Linux, Apple, Windows, OpenBSD, MediaTek, Linksys, and others, are all affected by some variant of the attacks. For more information about specific products, consult the database of CERT/CC, or contact your vendor.
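
To get a feel for why forcing nonce reuse matters so much, look at what nonce reuse does to any stream-style cipher. The toy Python sketch below (using the third-party cryptography package and AES-CTR as a stand-in, not WPA2’s actual CCMP handshake) shows that when two messages are encrypted with the same key and nonce, an eavesdropper can XOR the ciphertexts and recover the XOR of the plaintexts without ever touching the key:

    import os
    from cryptography.hazmat.backends import default_backend
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    def ctr_encrypt(key: bytes, nonce: bytes, plaintext: bytes) -> bytes:
        """AES-CTR encryption: a keystream XORed onto the plaintext."""
        encryptor = Cipher(
            algorithms.AES(key), modes.CTR(nonce), backend=default_backend()
        ).encryptor()
        return encryptor.update(plaintext) + encryptor.finalize()

    key = os.urandom(16)
    nonce = os.urandom(16)             # reusing this value is the entire problem

    p1 = b"attack at dawn!!"
    p2 = b"meet me at noon."
    c1 = ctr_encrypt(key, nonce, p1)   # same key ...
    c2 = ctr_encrypt(key, nonce, p2)   # ... same nonce (what KRACK forces)

    # The identical keystream cancels out: c1 XOR c2 == p1 XOR p2.
    leaked = bytes(a ^ b for a, b in zip(c1, c2))
    assert leaked == bytes(a ^ b for a, b in zip(p1, p2))
    print("ciphertext XOR leaks plaintext structure:", leaked)

Since email and web traffic are full of predictable plaintext, that XOR is usually enough to recover real content, which is why the fix is patching clients and access points rather than changing your Wi-Fi password.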

I was cruising along in my naive security bliss this morning when I saw the article Millions of high-security crypto keys crippled by newly discovered flaw. It turns out that a widely used RSA library has a deep flaw and has been generating weak keys since 2012.

A crippling flaw in a widely used code library has fatally undermined the security of millions of encryption keys used in some of the highest-stakes settings, including national identity cards, software- and application-signing, and trusted platform modules protecting government and corporate computers.

The weakness allows attackers to calculate the private portion of any vulnerable key using nothing more than the corresponding public portion. Hackers can then use the private key to impersonate key owners, decrypt sensitive data, sneak malicious code into digitally signed software, and bypass protections that prevent accessing or tampering with stolen PCs. The five-year-old flaw is also troubling because it’s located in code that complies with two internationally recognized security certification standards that are binding on many governments, contractors, and companies around the world. The code library was developed by German chipmaker Infineon and has been generating weak keys since 2012 at the latest.
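
The claim in that last paragraph – computing the private key from nothing but the public key – is just RSA arithmetic once the modulus can be factored. The real attack exploits the structure of the primes Infineon’s library generated; the toy Python sketch below uses a deliberately tiny textbook modulus purely to show the algebra:

    # Toy illustration only: a real RSA modulus is 2048+ bits and infeasible
    # to factor by trial division. This flaw mattered because Infineon's
    # primes had enough structure to make factoring the modulus practical.
    def recover_private_exponent(n: int, e: int) -> int:
        p = next(i for i in range(2, n) if n % i == 0)  # factor n (only works because n is tiny)
        q = n // p
        phi = (p - 1) * (q - 1)
        return pow(e, -1, phi)                          # d = e^-1 mod phi(n), Python 3.8+

    n, e = 3233, 17                          # public key (secretly p=61, q=53)
    d = recover_private_exponent(n, e)

    message = 65
    ciphertext = pow(message, e, n)          # anyone can encrypt to the public key
    assert pow(ciphertext, d, n) == message  # the recovered d decrypts it
    print("recovered private exponent d =", d)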

I’m sure there will be a lot more written about each of these flaws in the next few days. I expect every security vendor is hard at work this morning figuring out what to patch, how to do it, what to tell their customers, and how to get all the patches out in the world as fast as possible.

The constraint, of course, will be on the user side. A large number of customers of the flawed products won’t update their side of things very quickly. And many more bad guys now have a very clear roadmap to a highly vulnerable attack vector.

Be safe out there. Well, at least realize that whatever you generate digitally isn’t as safe and secure as you might think it is.


Yesterday I wrote about getting stuck in an hour-long reading loop on the Apple / FBI situation. As much as I didn’t want it to happen again today, it did. More on that in a minute.

But first, I want to encourage you to go watch the movie Race, which is the story of Jesse Owens and the 1936 Berlin Olympics. It was superb, and the double entendre of the title played out for a full two hours as the movie made us think hard about race in America in the 1930s and what was happening in Nazi Germany at the same time. I also thought the acting by the primary characters, including Stephan James (Jesse Owens), Jason Sudeikis (Larry Snyder – Owens’s coach), and Barnaby Metschurat (Joseph Goebbels – Nazi propaganda minister), was incredible. Metschurat was a special bonus – he brought an extremely uncomfortable feeling of deep menace to every scene he was in.

At the end of the movie, the entire theater clapped (that doesn’t happen very often). As Amy and I walked to our car, we commented that while we’ve made a lot of progress since 1936, we’ve got a very long way to go. On our way home, we talked about current events against a backdrop of 1936 racism in America and the systemic racism in Nazi Germany that was laying the groundwork for genocide. I read the NPR review of Race this morning after seeing the movie and agreed with everything in it.

This morning, as I was going through my morning reading stuff online, I fell down the FBI / Apple rabbit hole again. It’s interesting how the substantive analysis is improving every day, and I finally feel like I have my mind around the technical issue, the mistakes the government already made that prevented it from getting the data it wanted, Apple’s consistent and appropriate supportive behavior with law enforcement up to this point, the government overreach that is now happening, and the correctness of Apple’s position. In other words, nothing has changed in my opinion that Apple is in the right in this situation, but I can now explain it a lot more clearly.

If you want to spend time in the rabbit hole with me (I’ve set up a couch, have poured some drinks, and have nice music playing down here), here are some things to read.

As humans, I think it’s important to remember that the TechnoCore is paying attention to this and watching everything we do. Even if they are looking back at us from the future, what we are doing now matters.


Super Cooper (our new dog – now one year old) woke me up at 4:45 this morning so I got up, let him out, got a cup of coffee, sat down in front of my computer, and spent the next hour going down the rabbit hole of the FBI / Apple phone unlock backdoor encryption security controversy.

After an hour of reading, I feel even more certain that Apple is totally in the right and the FBI’s request should be denied.

The easiest argument to understand is Bruce Schneier’s in the Washington Post, titled Why you should side with Apple, not the FBI, in the San Bernardino iPhone case. His punch line is extremely clear.

“Either everyone gets security or no one does. Either everyone gets access or no one does. The current case is about a single iPhone 5c, but the precedent it sets will apply to all smartphones, computers, cars and everything the Internet of Things promises. The danger is that the court’s demands will pave the way to the FBI forcing Apple and others to reduce the security levels of their smart phones and computers, as well as the security of cars, medical devices, homes, and everything else that will soon be computerized. The FBI may be targeting the iPhone of the San Bernardino shooter, but its actions imperil us all.”

Given that the law being used to try to compel Apple to do this is based on the All Writs Act of 1789, precedent matters a lot here. And, if you thought the legal decisions from 1789 anticipated the digital age, please fasten your seat belts – or maybe even encase yourself in an impermeable bubble – for the next 50 years as it’s going to get really complicated.

Once I got past the advocacy articles like Why Apple Is Right to Challenge an Order to Help the F.B.I. in the NY Times, I read a few of the “what is really going on here” articles like Wired’s Apple’s FBI Battle Is Complicated. Here’s What’s Really Going On. The context was starting to repeat, so I got it at a high level but wanted to understand the technical underpinnings of what was happening.

Since it’s the Internet, it was pretty easy to find that. The Motherboard article Why the FBI’s Order to Apple Is So Technically Clever was a good start. Dan Guido’s Trail of Bits post Apple can comply with the FBI court order was super interesting since he focused only on the technical aspects – on feasibility rather than getting tangled up in whether it’s a good idea or not. Ars Technica has an article that adds a little in Encryption isn’t at stake, the FBI knows Apple already has the desired key.

I then wandered around a few random articles. Troy Hunt’s Everything you need to know about the Apple versus FBI case has some great historical context and then unloads with current activity, including Google CEO Sundar Pichai’s Twitter support of Apple / Tim Cook. I ended with Why Tim Cook is wrong: A privacy advocate’s view.

Interestingly (and not surprisingly), the titles don’t reflect the actual critical thinking you can derive from reading the articles, so just scanning headlines creates huge bias, whether you realize it or not. When I started reading the various articles, I immediately forgot most of the headlines and was surprised by some of them when I put this post together, since the headlines often didn’t correspond with the message of the article.

I expect there will be lots of additional commentary on this as it continues to unfold in court and in public view. What I read, and thought about this morning, reinforced my view that I’m very glad Tim Cook and Apple are taking a public – and loud – stand against this. We are going to have to deal with stuff like this with increasing frequency over the next 50 years and what happens in the next decade is going to matter a lot.


Following is a guest post from my friend Eliot Peper. I met Eliot several years ago when he approached me about his first book. I loved his writing and FG Press went on to publish Eliot’s first two books – Uncommon Stock: Version 1.0 and Uncommon Stock: Power Play.

Eliot’s third book, Uncommon Stock: Exit Strategy came out recently and the topic is particularly timely. Enjoy some deeper thoughts of his on why. Oh – and grab Eliot’s books – they are awesome.

Our institutions are failing to protect us. In fact, they’re not even trying. That wasn’t what I set out to discover when I started drafting my first novel. I just wanted to write a page-turner about tech startups with enough real grit to make readers think (true fans may remember that I noted my original inspiration right here in a previous guest post). To research the book, I interviewed federal special agents, financial service executives, money laundering investigators, cybersecurity experts, investors, and technologists in order to deepen the story’s verisimilitude.

The novel turned into a trilogy and along the way I discovered how fact can be far more disturbing than fiction (a point of frustration for novelists). Every day, our government officials, bankers, and corporate leaders are betraying our trust through shortsightedness and technical ignorance.

The now-infamous breach of the Office of Personnel Management by state-sponsored Chinese hackers shocked the nation. Detailed background files on more than twenty-two million Americans were stolen. The pilfered data included medical history, Social Security numbers, and sensitive personal information on senior officials within the Department of Defense, the Federal Bureau of Investigation, and even the Central Intelligence Agency. The national security implications are staggering.

The emperor may have no clothes but he doesn’t stand alone. Every year, hundreds of millions of dollars are spirited away from major financial institutions. The United Nations estimates that organized crime brings in $2 trillion a year in profits and the black market makes up 15–20% of global GDP.

How do cartel bosses, arms dealers, and human traffickers stash their cash? By working with corrupt insiders, exploiting legal loopholes, lobbying crooked politicians, and taking advantage of the same kinds of technical weaknesses that made the OPM hack possible. They are only able to get away with it because banks and regulators turn a blind eye or, more often, don’t even know when it’s happening.

Large organizations like government agencies and international financial institutions started incorporating software into their operations decades ago. Ever since, they have consistently chosen to pile new updates on top of old code rather than rebuild systems from the ground up. Why? In the short run, it’s cheaper and easier to address the symptom instead of the cause. Now, that shortsightedness is catching up with them.

All of this is just what we know about already. It takes a median of 229 days for data breaches to even be discovered. That’s a long time for criminals to be inside our systems, building new backdoors for future exploitation. Worse, institutions are loath to report breaches even when they are uncovered for fear that our trust in them will degrade even further.

The software powering the digital infrastructure of our institutions is a mess of half-measures, lost source code, and mind-boggling integrations. It’s like a vault built out of Swiss cheese, a house resting on a matchstick foundation, or the plot of a telenovela. You can choose your own metaphor, but every hole is a VIP ticket for society’s antagonists.

And that’s not all. In a study released earlier this month, the Government Accountability Office found that many federal examiners in charge of bank information security audits have little or no IT training. They also discovered that regulators are not even doing comparative analysis on system-wide deficiencies, limiting their scope to individual banks. Worse, the National Credit Union Administration lacks the authority to examine third-party service providers to credit unions, leaving large segments of their systems beyond the jurisdiction of examiners. It’s painfully ironic that at a time when the NSA terrifies us with its digital omnipotence, so many government agencies can’t get their act together for legitimate enforcement. Our watchdogs are asleep on their feet.

Whether their endgames are espionage or financial malfeasance, we’re making it too damn easy for bad guys to do their dirty work. I was only trying to make my books feel real but now reality is forcing me to suspend disbelief. It makes for great plot twists, but verisimilitude isn’t worth this level of vulnerability.

These are big problems. Big problems always represent big opportunities for creative founders. Mattermark just released their first report on the hottest cybersecurity startups. But we need fixes that are even more fundamental than security. We must rebuild the technical infrastructure and human governance systems that shape our institutions. That change might come from an extraordinarily dedicated internal leader or it might emerge from a garage in Boulder.

We need hackers, makers, artists, and independent thinkers. We need to play smarter and think long-term. We need to call our leaders to action. We need to educate ourselves and build a future in which we can thrive, not fight to survive.  


An increasing number of companies that I work with are using PGP to encrypt certain email. While they are comfortable sending a lot of email unencrypted, there are periodic threads that different people want to have encrypted for a variety of reasons, some rational and some not.

Each company is dealing with this a different way. Suddenly I find myself managing a bunch of public keys in different PGP tools on different computers. I started by going with the recommendation of each company and predictably found myself managing multiple solutions that sort of worked some of the time.

Last night I was on a hangout with one of the CEOs trying to troubleshoot the problem we were having with the implementation his company was using. After 15 minutes of fighting with a Chrome plugin, we gave up. Of course, when I went to a different computer, it worked just fine.

This seems like such a simple thing for Google (and Yahoo and Microsoft) to build into their email clients, especially the browser based ones. Keep the keys locally (or even in Dropbox or iCloud). Encrypt and decrypt from within the browser. Only transmit encrypted email. Only display the decrypted email.
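
The mechanics the mail providers would need are not exotic. Here is a minimal sketch of that flow in Python – assuming the third-party python-gnupg wrapper, a local GnuPG install, and made-up addresses and passphrases – where keys live in a local keyring and only ciphertext ever leaves the machine:

    import os
    import gnupg  # third-party python-gnupg wrapper around a local GnuPG install

    # Local keyring; the private key never leaves this machine.
    gpg = gnupg.GPG(gnupghome=os.path.expanduser("~/.gnupg"))

    # Encrypt to the recipient's public key (imported into the keyring earlier).
    message = "The board deck is attached."
    encrypted = gpg.encrypt(message, recipients=["ceo@example.com"],
                            always_trust=True)  # skip trust prompts for the sketch
    assert encrypted.ok, encrypted.status
    ciphertext = str(encrypted)  # this ASCII-armored blob is all the server would see

    # Only the holder of the matching private key (and passphrase) can read it.
    decrypted = gpg.decrypt(ciphertext, passphrase="hypothetical-passphrase")
    print(decrypted.data.decode())

The hard part isn’t this code – it’s making key management invisible to normal users, which is presumably where the real work for a Gmail-scale provider lies.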

Why hasn’t this been done yet? Am I missing something obvious?


Raj Bhargava (CEO of JumpCloud) and I got into a discussion at dinner the other night about the major security hacks this past year, including Sony, eBay, Target, and The Home Depot. Raj spent over a decade in the security software business, and it was fascinating to realize that a common thread in virtually all of these major compromises was hacked credentials.

I felt this pain personally yesterday. A bunch of random charges to Match.com, FTD.com, and a few other sites showed up on Amy’s Amex card. We couldn’t figure out where it got stolen from, but clearly it was from another online site somewhere since it’s a card she uses for a lot of online purchases, so I cancelled it. Due to Amex’s endless security process, it took almost 30 minutes to cancel the card, get a new one, and add someone else to the account so I wouldn’t have to go through the nonsense the next time.

In my conversation with Raj, we moved from basic credential security to the notion that the number of sites we access is exploding. Think about how many different logins you have to deal with each day. I’m pretty organized about how I do it and it’s still totally fucked.

Every major new service is managed separately. Accounts on AWS or Google Compute Engine or Office 365 are managed separately. GitHub is managed separately. Google Apps are managed separately. Every SaaS app is managed separately. All your iOS logins are yet another thing to deal with. The only thing that isn’t managed separately is individual devices – as long as you have an IT department to manage them. Oh wait, are they managing your Mac? How about your iPhone and other BYOD devices? Logins and passwords everywhere.

Raj’s assertion to me at our dinner was that there are too many different places, and too many scenarios, where something can be compromised. For instance, some companies use password managers and some don’t. Some companies that take password management to an individual level – where a single employee manages her own passwords – end up with the same login / password combinations used over and over again. Or worse, the login / password list ends up in an unencrypted file on someone’s device (ahem, Sony).

If you are nodding, you are being realistic. If you aren’t nodding, do a reality check to see if you are in denial about your own behavior or your organization’s behavior. Think about how new services enter your organization. A developer, IT admin, marketing person, executive, or salesperson just signs up for a new online service to try. When doing so, which credentials do they use? If it is connecting to your company’s environment, it’s likely they are using your organization’s email address and the same password they use internally. That’s a recipe for getting hacked.

So, Raj and I started discussing solutions. Some of it may just be unsolvable as human nature may not let us completely protect users online. But it seems like there are areas where we can make some immediate headway.

  • Secure directory services (the approach JumpCloud is taking)
  • Multi-factor authentication has become all the rage (I use it)
  • Different strong passwords for each service, possibly via a password manager like LastPass (which is what I use)
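
On the last point, the generation side is the easy part – every service gets its own long random secret, and nothing is ever reused. A minimal Python sketch (standard library only; the real value of a manager like LastPass is the syncing, auto-fill, and sharing built around this):

    import secrets
    import string

    def generate_password(length: int = 24) -> str:
        """Return a random password drawn from letters, digits, and symbols."""
        alphabet = string.ascii_letters + string.digits + "!@#$%^&*()-_=+"
        return "".join(secrets.choice(alphabet) for _ in range(length))

    # One unique credential per service -- never shared across sites.
    for service in ("aws", "github", "office365", "some-saas-app"):
        print(f"{service:15s} {generate_password()}")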

What other approaches exist that would scale up from small (10 person orgs) to large (100,000 person orgs) and provide the same level of identity and credential security?


When LinkedIn posted LinkedIn Intro: Doing the Impossible on iOS I was intrigued. The post title was provocative (presumably as intended) and drew a lot of attention from various people in the security world. Several of the responses were deeply critical, which generated another post from LinkedIn titled The Facts about LinkedIn Intro. By this point I had sent emails to several of my friends who were experts in the email / SMTP / IMAP / security ecosystem and was already getting feedback that generally trended negative. And then I saw a post titled Phishing With Linkedin’s Intro – a clever phishing attack on Intro (since fixed by LinkedIn).

All of this highlights for me my general suspicion around the word “impossible” along with the complexity that is increasing as more and more services interconnect in non-standard ways.

One of the thoughtful notes I got was from Scott Petry – one of my good friends and co-founder of Authentic8 (we are investors). Scott co-founded Postini and ran a bunch of email stuff at Google after Google acquired Postini in 2007. Following are his thoughts on LinkedIn Intro.

I am all for seamless integration of services. And while “man in the middle” is commonly seen as a pejorative, the MITM approach can enable integrations that weren’t readily available previously.

Postini, which started life as a spam filtering service, became a huge email MITM, enabling all sorts of email processing not available on the mail server itself. Seamless integration was a big part of our success – companies pointed their MX record to Postini, Postini filtered and passed the good stuff on to the company’s mail server. While controversial in 1999, DNS redirect-based services have become accepted across all ports and protocols. Companies such as Cloudflare, OpenDNS, Smartling, and more all offer in-line services that improve the web experience through a DNS-level, MITM-type model. Simple to configure and high leverage. They just aren’t thought of as MITM services.

Extending functionality of services by authorizing plug-ins to gain access to your data can be really useful as well. I use Yesware in Gmail to help track messages and automate responses when I send company-related marketing/sales emails. It’s a great service, enabling functionality not previously available, and you could think of this as a man in the middle as well. It is important to point out that in the case of Yesware and DNS style integrations, I need to explicitly approve the integration. The details are made available up front.

New levels of integrated services are coming online daily. And vendors are getting more and more clever with APIs, or skirting them altogether, in order to get their app in front of us. It’s natural to be sucked in by the value of these services and easy to overlook any downside, especially given that for many of them, the people who are paid to think about security ramifications aren’t in the loop. They can be installed and configured by end users, not IT. And most users take the security for granted … or overlook it altogether.

Last week, on the LinkedIn engineering blog, details on the new LinkedIn Intro app were shared. Intro integrates dynamic LinkedIn profile information directly into the iOS email app. It didn’t get much attention when it was launched, but once the engineering team blogged about how they did the impossible to integrate with the iOS email client, the story blew up.

Details on their approach here (https://engineering.linkedin.com/mobile/linkedin-intro-doing-impossible-ios).

LinkedIn Intro does a beautiful job of auto-discovering your environment and auto-configuring itself. A click or two by the user, and they’re up and running with active LinkedIn data in their email app.

All this clever engineering hides the fact that LinkedIn is accessing your email on your behalf. Intro uses an IMAP proxy server to fetch your mail where they modify it, then deliver it to your iPhone. Classic Man in the Middle.
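
A mail proxy does not need to be exotic to occupy that position – anything that relays the connection sees, and can rewrite, every byte passing through it. Here is a deliberately minimal Python sketch of the relay pattern (not LinkedIn’s implementation; the hostnames and ports are placeholders, and a real proxy would terminate TLS, speak IMAP, and modify messages in flight):

    import socket
    import threading

    UPSTREAM = ("imap.example.com", 143)  # placeholder for the real mail server
    LISTEN = ("0.0.0.0", 1143)            # the mail client is pointed here instead

    def pump(src: socket.socket, dst: socket.socket) -> None:
        """Copy bytes one way; a proxy can read or rewrite anything passing here."""
        while chunk := src.recv(4096):
            dst.sendall(chunk)
        dst.close()

    def handle(client: socket.socket) -> None:
        upstream = socket.create_connection(UPSTREAM)
        threading.Thread(target=pump, args=(client, upstream), daemon=True).start()
        pump(upstream, client)  # server -> client: where content could be injected

    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(LISTEN)
    server.listen()
    while True:
        conn, _ = server.accept()
        threading.Thread(target=handle, args=(conn,), daemon=True).start()

The point is simply that whoever operates the box in the middle has the same view of your mail that your mail server does.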

If you remember setting up your mail service on your iPhone, it is a bit clunky. You need to know the host names of your service, the ports, encryption values, etc. It isn’t easy. But you don’t do any of this with Intro. Instead of going through the usual configuration screens on iOS, Intro uses Apple’s “configuration profiles” capability to auto-discover your accounts and insert their servers in the middle. And since it uses OAuth to log in, it doesn’t even need to ask for your credentials.

They do such a good job of hiding what they’re doing that the significance of the data issues was lost on everyone (except the security researchers who raised the brouhaha).

This weekend, LinkedIn made another blog post. In their words, they wanted to “address inaccurate assertions that have been made” and “clear up these inaccuracies and misperceptions”. The post, here (https://blog.linkedin.com/2013/10/26/the-facts-about-linkedin-intro/), followed the PR playbook to the letter.

With one small exception concerning a profile change, the post does nothing to clear up inaccuracies and misperceptions. Instead, their post lists their reassurances about how secure the service is.

Even with these assurances, the facts remain. LinkedIn Intro pipes your email through their servers. All of it. LinkedIn Intro inserts their active web content into your email data. At their discretion.

With its clever engineering, Intro became a violation of trust. And worse, potentially a massive security hole. If the research community hadn’t raised the alarm, the details of Intro’s integration wouldn’t have hit the radar.

I think the lesson here is two-fold:

1) We live in a world where our data is scattered across a variety of disparate systems. It is incumbent on us to understand the risks and weigh them against the reward of the shiny new app promising to make our lives better. If something appears to be too good to be true, it probably is.

2) Vendors need to be more transparent about what they’re doing with our data, especially if the vendor has a spotty reputation in the privacy and security realms. If they’re not, the Internet community will do it for them.