CVE: CVE-2016-1645
CWE:
- 130
- 119
bugs:
- 587227
repo:
vccs:
- note: |
    This is the oldest place I could find this function referenced. At this
    location, the author creates a static image-reading method, which may
    have introduced the issue.
  commit:
fixes:
- note: Added a check for overflows
  commit:
- note: Roll PDFium e4ac336..28de044
  commit: 7045ee0a8b4b19d298c67556ee0b92cd575e1303
- note: |
    Part of a separate repo for OpenJPEG. Allegedly fixed the offset
    computations in the method, which should resolve the out-of-bounds
    errors.
  commit:
bounty:
  date:
  amount:
references: []
lessons:
  yagni:
    note:
    applies:
  question: |
    Are there any common lessons we have learned from class that apply to this
    vulnerability? In other words, could this vulnerability serve as an example
    of one of those lessons? Leave "applies" blank or put false if you did not
    see that lesson (you do not need to put a reason). Put "true" if you feel
    the lesson applies and put a quick explanation of how it applies. Don't
    feel the need to claim that ALL of these apply, but it's pretty likely
    that one or two of them apply. If you think of another lesson we covered
    in class that applies here, feel free to give it a small name and add one
    in the same format as these.
  serial_killer:
    note:
    applies:
  complex_inputs:
    note:
    applies:
  distrust_input:
    note: |
      The vulnerability was the result of the input being given as an image in
      a PDF file, with the metadata causing a buffer overflow because the
      system was not expecting certain values. If they had validated their
      input and been highly distrustful, they likely could have avoided this
      problem.
    applies: true
  least_privilege:
    note:
    applies:
  native_wrappers:
    note:
    applies:
  defense_in_depth:
    note:
    applies:
  secure_by_default:
    note:
    applies:
  environment_variables:
    note:
    applies:
  security_by_obscurity:
    note:
    applies:
  frameworks_are_optional:
    note:
    applies:
reviews:
- 1728813002
- 1756483002
upvotes: 18
nickname: Yngwie Malformed PDF
mistakes:
  answer: |
    There was a combination of design mistakes, coding mistakes, and either
    laziness or constraints.

    First, the team did not account for buffer overflows when they originally
    wrote the method. This could have been avoided with input sanitization and
    bounds checks to handle any potential out-of-bounds problems.

    On top of failing to handle the problem originally, they did not write any
    unit tests to try to catch it before it happened. It is possible that even
    with unit testing they could not have caught this, but no tests were
    written at all -- not at the VCC, and not at the fix. There were 4-5
    commits between the VCC and the fix, which suggests writing test cases
    would have been a good idea, as so many earlier attempts had failed to
    fix it.

    If the team had designed the method to validate inputs and properly handle
    or anticipate buffer overflows, this problem might have been avoided from
    the start. There were likely some constraints, though: OpenJPEG is a
    library that Chromium uses, so its authors were not necessarily testing
    it for, or intending it for, the uses Google eventually put it to.
  question: |
    In your opinion, after all of this research, what mistakes were made that
    led to this vulnerability? Coding mistakes? Design mistakes?
    Maintainability? Requirements? Miscommunications? Look at the CWE entry
    for this vulnerability and examine the mitigations they have written
    there. Are they doing those? Does the fix look proper? Use those questions
    to inspire your answer. Don't feel obligated to answer every one. Write a
    thoughtful entry here that those entering the software engineering
    industry would find interesting.
announced: '2016-03-13 18:59:05.060000000 -04:00'
subsystem:
  name: OpenJPEG
  answer: Based on the source code directory names and the git blame hierarchy listing.
  question: |
    What subsystems was the mistake in? Look at the path of the source code
    files that were fixed to get directory names. Look at comments in the
    code. Look at the bug reports and how the bug report was tagged.
    Examples: "clipboard", "gpu", "ssl", "speech", "renderer"
discovered:
  date: '2016-01-14'
  answer: |
    The vulnerability was found by the Zero Day Initiative working with an
    anonymous contributor. ZDI replicated this bug on Windows 8.1 with Google
    Chrome 50. They listed out the different components, such as the metadata
    and the picture, and were able to crash the sandboxed Chrome. It appears
    they originally found the problem on 14 Jan 2016, but released the report
    on 16 Feb 2016.
  google: false
  contest:
  question: |
    How was this vulnerability discovered? Go to the bug report and read the
    conversation to find out how this was originally found. Answer in longform
    below in "answer", fill in the date in YYYY-MM-DD, and then determine if
    the vulnerability was found by a Google employee (you can tell from their
    email address). If it's clear that the vulnerability was discovered by a
    contest, fill in the name there. The "automated" flag can be true, false,
    or nil. The "google" flag can be true, false, or nil. If there is no
    evidence as to how this vulnerability was found, then you may leave the
    entries blank except for "answer". Write down where you looked in
    "answer".
  automated: false
description: |
  Attackers could craft the metadata for an image -- the information that
  describes how the image data is laid out -- so that when it was read by
  certain programs or tools, it could trigger denial-of-service conditions
  that crash applications or machines. Google used this library to interpret
  and render the images inside PDFs for viewing, in addition to reading the
  metadata for things like image tags describing the photos and alt-text for
  screen readers and accessibility settings. The crash was caused by
  incorrectly converting integers to other types, or from negative to
  positive, resulting in integer signedness errors. These errors could lead
  to out-of-bounds writes, where the attacker could inject or execute code
  where they should not be able to, or to number-casting errors that could
  violate the integrity of the data. This vulnerability mattered for its
  potential to affect the availability of systems and the integrity of data.
  The solution was to upgrade to r3002 of OpenJPEG, which, based on intuition
  and reasoning, added checks on the data to ensure it did not overrun its
  buffers.
unit_tested:
  fix: false
  code: false
  answer: |
    It does not appear that the original code contained unit tests, nor did
    the fix add any unit tests in the file or as a separate file. The only
    file changed was j2k.c, and the change touched functions, pointers, and
    how the code handled overflows and buffer sizing.
  question: |
    Were automated unit tests involved in this vulnerability? Was the original
    code unit tested, or not unit tested? Did the fix involve improving the
    automated tests? For the "code" answer below, look not only at the fix but
    at the surrounding code near the fix and determine whether there were unit
    tests for this module. For the "fix" answer below, check if the fix for
    the vulnerability involves adding or improving an automated test to ensure
    this doesn't happen again.
major_events:
  answer: I did not see any changes to the team or to surrounding subsystems in my research that would indicate any inherent problems.
  events:
  - date:
    name:
  - date:
    name:
  question: |
    Please record any major events you found in the history of this
    vulnerability. Was the code rewritten at some point? Was a nearby
    subsystem changed? Did the team change? The event doesn't need to be
    directly related to this vulnerability; rather, we want to capture what
    the development team was dealing with at the time.
curation_level: 1
CWE_instructions: |
  Please go to cwe.mitre.org and find the most specific, appropriate CWE
  entry that describes your vulnerability. (Tip: this may not be a good one
  to start with -- spend time understanding this vulnerability before making
  your choice!)
bounty_instructions: |
  If you came across any indications that a bounty was paid out for this
  vulnerability, fill it out here. Or correct it if the information already
  here was wrong. Otherwise, leave it blank.
interesting_commits:
  answer:
  commits:
  - note: |
      I found this commit particularly interesting because it doesn't attempt
      to do any unit testing or verify that it works. The commit was undone by
      someone else shortly after, most likely because it was not successful.
      This is interesting because it was clear that the fix did not work, and
      it appears it took manual testing to verify, as no unit test file was
      committed. It would seem prudent to create one when fixing a
      vulnerability that previously had no unit tests.
    commit: 1fb24aba4b29b7cd1b6880d8f0b08196a12efc2c
  - note:
    commit:
  question: |
    Are there any interesting commits between your VCC(s) and fix(es)? Write
    a brief (under 100 words) description of why you think this commit was
    interesting in light of the lessons learned from this vulnerability. Any
    emerging themes? If there are no interesting commits, demonstrate that
    you completed this section by explaining what happened between the VCCs
    and the fix.
curated_instructions: |
  If you are manually editing this file, then you are "curating" it. Set the
  entry below to "true" as soon as you start. This will enable additional
  integrity checks on this file to make sure you fill everything out
  properly. If you are a student, we cannot accept your work as finished
  unless curated is set to true.
upvotes_instructions: |
  For the first round, ignore this upvotes number. For the second round of
  reviewing, you will be giving a certain amount of upvotes to each
  vulnerability you see. Your peers will tell you how interesting they think
  this vulnerability is, and you'll add that to the upvotes score on your
  branch.
announced_instructions: |
  Was there a date that this vulnerability was announced to the world? You
  can find this in changelogs, blogs, bug reports, or perhaps the CVE date.
  A good source for this is Chrome's Stable Release Channel
  (https://chromereleases.googleblog.com/). Please enter your date in
  YYYY-MM-DD format.
fixes_vcc_instructions: |
  Please put the commit hash in "commit" below (see my example in
  CVE-2011-3092.yml). Fixes and VCCs follow the same format.
description_instructions: |
  You can get an initial description from the CVE entry on cve.mitre.org.
  These descriptions are a fine start, but they can be kind of jargony.
  Rewrite this description in your own words. Make it interesting and easy
  to read to anyone with some programming experience. We can always pull up
  the NVD description later to get more technical. Try to still be specific
  in your description, but remove Chromium-specific stuff. Remove references
  to versions, specific filenames, and other jargon that outsiders to
  Chromium would not understand. Technology like "regular expressions" is
  fine, and security phrases like "invalid write" are fine to keep too.
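The class of bug described above (integer signedness and overflow errors when sizing buffers from untrusted image metadata) and the class of fix ("added a check for overflows") can be illustrated with a short C sketch. This is hypothetical code, not the actual OpenJPEG j2k.c: the function names, the size cap, and the types are assumptions made for illustration.

```c
#include <stddef.h>
#include <stdint.h>

/* Unsafe pattern: attacker-controlled dimensions multiplied in a signed
 * int. Large values can overflow (undefined behavior) or wrap negative,
 * so a later "size < limit" test can pass while the real copy runs out
 * of bounds. */
static int unsafe_tile_bytes(int width, int height) {
    return width * height;  /* no range check, may overflow */
}

/* Hardened pattern: validate the range, multiply in a wider unsigned
 * type, and re-check against a sane ceiling before any allocation.
 * Returns 0 on success, -1 if the metadata is rejected. */
static int checked_tile_bytes(uint32_t width, uint32_t height, size_t *out) {
    const uint64_t limit = (uint64_t)1 << 28;  /* 256 MiB cap, an assumption */
    if (width == 0 || height == 0)
        return -1;
    uint64_t bytes = (uint64_t)width * (uint64_t)height;
    if (bytes > limit)
        return -1;  /* reject outright rather than truncate or wrap */
    *out = (size_t)bytes;
    return 0;
}
```

The key design choice in the hardened version is rejecting suspicious metadata instead of clamping or casting it, which matches the "distrust input" lesson recorded above: a 65536x65536 tile claim is refused before any buffer is sized from it.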
