---
title: Buzzword-Driven “Pop Infosec”
description: >
  Information Security has a buzzword problem.
---

Information security is complicated. Combine that with the fact that an
increasing number of people also seem to consider it very important, and the
result is something I like to call “pop infosec.”

As in pop science or popular psychology, making information security accessible
often involves simplifying concepts to improve their general palatability, which
can leave laypeople overconfident in their own understanding. This [“easiness
effect”](https://en.wikipedia.org/wiki/Easiness_effect) has been studied in the
context of science communication, and likely applies to information security in
much the same way.

While helping people protect themselves from security threats is certainly
laudable, it’s important to do it responsibly in order to maximize benefit and
minimize harm. Unfortunately, a few things I’ve noticed recently suggest that
this is not happening.

<!--more-->

## “The Cloud”

I recently read (part of) an article in the Wall Street Journal (before I got
cut off by their paywall) about a data breach which read:

> The data was stored on Amazon.com Inc.’s cloud, according to a federal
> criminal complaint and people familiar with the matter. The avenue of entry,
> the companies and investigators said, was a poorly configured firewall [...]
>
> Both companies say controls around the data, rather than use of the cloud,
> were the problem. Still, the data was stored in the cloud, raising questions
> about whether Capital One put insufficient safeguards in place to lock down
> customer records when it adopted cloud technology.

Clearly, the reporter has decided to inject some good old “ZOMG all ur dataz are
in teh cloud” fear-mongering. That aside, this is some of the worst analysis
I’ve seen. Imagine you’re trying to keep a box of papers safe; the problem isn’t
that you kept the box in a self-storage unit instead of in your house, the
problem is that you left the door unlocked. If the company had a poorly
configured cloud environment, why should I expect them to properly configure a
firewall in some other environment?

In other words, the WSJ has this _almost_ right: it does raise questions about
whether sufficient safeguards were in place, but these questions are orthogonal
to any particular technologies or events.

This is a simple confusion of correlation with causation. To cite a common
example, suppose you thought drowning deaths were a large problem and you
learned that there was a strong correlation between ice cream sales and
drowning deaths. Since swimming and eating ice cream are simply both summertime
activities, it would of course be a mistake to conclude that banning ice cream
would reduce the number of drowning deaths. Likewise, as more companies adopt
cloud services, we should certainly not be surprised that more data breaches
happen to involve the cloud.

<aside>
For the record, I certainly do not believe that “the cloud” is a panacea;
rather, security is only meaningful relative to a threat model, which may or
may not involve where hardware happens to be physically located.
</aside>

## “High Severity Vulnerability”

Apparently, all it takes to waste a lot of time and energy and kick up a big
fuss is to label something as “high severity.”

Consider this notice I saw when I logged on to GitHub one day:

![Screenshot of a GitHub alert which reads “We found a potential security
vulnerability in one of your
dependencies.”](/assets/images/github-vuln-notice.png)

Clicking “See security alert” led me to the following notice:

![Screenshot of a GitHub notice describing a high severity CVE issued for axios
and recommending to update from 0.18.0 to
0.19.0](/assets/images/github-vuln-detail.png)

I looked up CVE-2019-10742 and quickly located the relevant pull request for
axios. To save you some clicks, axios is a JavaScript HTTP client library which
includes an API like this:

```
const axios = require('axios');

axios
  .get('http://example.com/evil.txt')
  .then(console.log)
  .catch(console.error);
```

Optionally, you can use the `get` API like this:

```
axios
  .get('http://example.com/evil.txt', { maxContentLength: 100 })
  .then(console.log)
  .catch(console.error);
```

in which case axios is expected to abort the response and reject the promise
after more than 100 bytes have been received. However, there was a bug in the
implementation where the promise would be rejected but reading from the stream
would continue, hence the CVE. But look at the code snippets above! **This CVE
only applies to codebases which actually _use_ the `maxContentLength` option!**
If you weren’t using `maxContentLength`, you weren’t expecting any responses to
be truncated in the first place. Nonetheless, I found lots of comments like

> will need to roll out a fix for compliance asap

> When will this issue be fixed? I have received tons of mail from github
> regarding axios.

> I can help work on it if needed, but we would need to get rid of axios
> otherwise on an open source SDK I’m actively maintaining

> we really need to get a fix out, especially seeing as we’re now getting Github
> notifications on this.
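
To make the bug concrete, here’s a minimal sketch of the failure mode (my own
illustration, not the actual axios source): the promise is rejected once the
limit is exceeded, but nothing stops the underlying stream, so the response
keeps being read anyway.

```
// Illustrative only. The point is that reject() by itself does not stop the
// stream; without something like stream.destroy(), 'data' events keep firing
// and bytes keep being read for nothing.
function readWithLimit(stream, maxContentLength) {
  return new Promise((resolve, reject) => {
    const chunks = [];
    let received = 0;

    stream.on('data', (chunk) => {
      received += chunk.length;
      if (received > maxContentLength) {
        reject(new Error('maxContentLength exceeded'));
        return; // Bug: the stream is never destroyed, so reading continues.
      }
      chunks.push(chunk);
    });

    stream.on('end', () => resolve(Buffer.concat(chunks)));
    stream.on('error', reject);
  });
}
```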

Thanks to the way GitHub shows references from other issues/pull requests, I was
also able to see how people were responding to the vulnerability alert within
their own code. Of the random sampling of projects with linked issues/PRs that I
audited, none actually used the `maxContentLength` option, yet all dutifully
updated their axios dependency and considered the issue resolved.

In reality, nothing about these projects’ security posture actually changed,
though their maintainers may have _thought_ it did. The real resolution for many
of these projects would be to first consider the impact if `maxContentLength`
were not set or respected and, if appropriate, update the
dependency **and actually use `maxContentLength`**.
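
As a sketch of what that might look like (the URL and the limit here are
placeholders I made up, not taken from any of the projects I looked at):

```
const axios = require('axios');

// Hypothetical limit chosen for illustration; pick whatever makes sense for
// the responses you actually expect.
const MAX_BYTES = 1024 * 1024;

axios
  .get('https://example.com/report.json', { maxContentLength: MAX_BYTES })
  .then((response) => {
    // Anything that resolves here stayed within the configured limit.
    console.log(response.data);
  })
  .catch((error) => {
    // Oversized responses (and ordinary request errors) end up here.
    console.error(error.message);
  });
```

Only with both pieces in place, an upgraded axios and `maxContentLength`
actually set, does the alert correspond to any change in behavior.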

Of course, this is not the fault of the developers. Collectively, one of the
biggest things we tell people about protecting themselves from vulnerabilities
is to keep their software up to date. In this case, developers saw a helpful
message saying to update their dependencies, they updated them (possibly even
with the automatic click of a button!), and they _still_ might have been
vulnerable.

## In Conclusion

Information security professionals need to be judicious about what they
communicate to and recommend for the public. As we’ve seen, “pop infosec” can
be ineffective or even harmful. And journalists need to ensure that their
reporting is consistent with evidence-based research.

I have said before that security is not a checklist; it is a mindset. You can’t
“be secure” by following some steps you find online or by avoiding certain
technologies. The most effective way to improve your security posture is to hire
smart people to think critically about your environment.