We post a newsletter-y update quarterly on security-dev@chromium.org. It's an open list, so subscribe if you're interested in updates, discussion, or feisty rants related to Chromium security.

Q3 2020

Greetings,

Here's an update on what the teams in Chrome Security have been up to in the third quarter of 2020.

The Chrome Safe Browsing team continued the roll-out of Enhanced Safe Browsing by launching it on Android in Chrome 86, and releasing a video with background on the feature. We also launched deep scanning of suspicious downloads, initially for users of Google's Advanced Protection program, which received positive coverage.

This quarter the Usable Security team vanquished a longtime foe: http:// subresources on https:// pages. Mixed content is now either upgraded to https:// or blocked. We also built new warnings for mixed forms and continued rolling out mixed download blocking. These launches protect users' privacy and security by decreasing the amount of plaintext content that attackers can spy on or manipulate. In Chrome 86, we are beginning a gradual rollout of a new low-confidence warning for lookalike domains, and we also expanded our existing lookalike interstitial. Finally, we rolled out a 1% Chrome 86 experiment to explore how simplifying the URL in the address bar can improve security outcomes.

The Platform Security team continued to move forward on memory safety: with Rust currently not approved for use in Chromium, we must try to improve C++. Toward that end, the PDFium Oilpan and MiraclePtr/*Scan projects are moving forward quickly and should be ready to try in Q4 and Q1 2021. In sandboxing news, we made changes to our Linux sandbox and calling code to handle upcoming glibc changes, continued servicifying the Certificate Verifier (unblocking work to isolate the network service), and worked on getting a better grip on Mojo.

Bugs-- has started encouraging Chrome developers to submit a vulnerability analysis after a bug is fixed (example). This guides our future work on eliminating common bug patterns. We collaborated with fuzzing teams across Google to host 50 summer interns, with strong impact across Chrome and other critical open source software (see blog post). We have added automated regression testing of past fixed crashes for engine-based fuzzers (e.g. libFuzzer, AFL). We have made several changes to our underlying fuzzing and build infrastructures: UI improvements, Syzkaller support, an OSS-Fuzz builder rewrite, and more. Lastly, we continue to push fuzzing research across the industry using our FuzzBench benchmarking platform, which has led to improvements in the AFL++, libFuzzer, and Honggfuzz fuzzing engines.

The Open Web Platform security team continues to focus on two problems: injection attacks and isolation primitives. Regarding injection, we're polishing our Trusted Types implementation, supporting Google's security team with bug fixes as they continue to roll it out across Google properties. We're following that up with experimental work on a Sanitizer API that's making good progress, and some hardening work around policy inheritance to fix a class of bugs that have cropped up recently. For isolation, we're continuing to focus on COOP deployment. We shipped COOP's report-only mode as an origin trial, and we're aiming to re-enable SharedArrayBuffers behind COOP+COEP in Chrome 88 after shipping some changes to the process model in Chrome 87 to enable `crossOriginIsolated`.
In Q3, Chrome's Security Architecture team enabled CORS for extension content scripts in Chrome 85, moving to a model that is more secure against compromised renderers. We made further progress on opt-in origin isolation, and we took the first steps towards several improved process model abstractions for Chrome. MiraclePtr work is progressing towards experiments, and we wrapped up the test infrastructure improvements from last quarter.

The CA/Browser Forum guidelines got big updates, with ballots to overhaul the guidelines to better match browser requirements, including certificate lifetimes, and long-overdue cleanups and clarifications. One good revamp deserves another, and the Chrome Root Certificate Policy got a big facelift as part of transitioning to a Chrome Root Store. CT Days 2020 was held in September, including the big announcement that Chrome was working to remove the One Google Log requirement by implementing SCT auditing. This summer, we also hosted an intern who worked on structure-aware ASN.1 fuzzing, and began integration with BoringSSL.

Cheers, Andrew, on behalf of the Chrome security team

Q2 2020

Greetings,

The second quarter of 2020 saw Chrome Security make good progress on multiple fronts, all helping to keep our users and the web safe.

The Chrome Safe Browsing team launched real-time phishing protection for all Android devices, and observed a 164% increase in phishing warnings for main-frame URLs. We also completed the rollout of Enhanced Safe Browsing to all users of Chrome on desktop platforms. We helped the Chrome for iOS team implement hash-based Safe Browsing protection in Chrome 84 for iOS for the first time ever. Also, working with various teams, most notably Mobile UX, we made significant progress towards shipping Enhanced Safe Browsing in Chrome 86 for Android. For desktop platforms, we landed changes to the in-browser phishing detection mechanism to help reduce phishing false negatives using new machine learning models for Chrome 84 and beyond. We also finalized the plan to disable more malicious Chrome extensions, starting with Chrome 85.

The Enamel team put the finishing touches on our work to prevent https:// pages from loading insecure content. We built a new warning for https:// pages with forms targeting insecure endpoints, and prepared to start rolling out mixed download warnings in Chrome 84. This release will also include mixed image autoupgrading and the second phase of TLS 1.0/1.1 deprecation. Even on an https:// website, users need to accurately understand which website they're visiting. We expanded our lookalike domain warning with new triggering heuristics, and prepared to launch an additional warning for lower-precision heuristics in M86.

The Platform Security team continued to make good progress on many of our longer-term projects, including sandboxing the network service (and the associated certificate verification servicification), adopting Oilpan garbage collection in PDFium's XFA implementation, and investigating memory safety techniques and exploit mitigation technologies. Along with our colleagues in Chrome Security Architecture, we've sharpened the security focus on Mojo, Chrome's IPC system, and started looking at what's needed to improve developer ergonomics and make it easier to reason about communicating over security boundaries. Also with CSA, we've worked on how MiraclePtr could help prevent use-after-free bugs in C++ code.
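To make the MiraclePtr discussion a little more concrete, here is a heavily simplified, illustrative sketch of the quarantine-on-free idea behind it. This is not Chrome's implementation, and every name below (guarded_ptr, GuardedNew, GuardedDelete) is invented for the example; the only point is the core behaviour: an allocation that is freed while still referenced is quarantined rather than reused, so a later use-after-free can be caught instead of silently landing on a new object.

```cpp
// Illustrative sketch only: a toy version of the quarantine-on-free idea
// behind MiraclePtr. All names here are invented for the example.
#include <cassert>
#include <cstdlib>
#include <new>
#include <unordered_map>
#include <utility>

struct Slot { int refs = 0; bool freed = false; };

// Per-allocation bookkeeping. A real scheme keeps equivalent state inside the
// allocator itself rather than in a side table like this.
inline std::unordered_map<void*, Slot>& Slots() {
  static std::unordered_map<void*, Slot> slots;
  return slots;
}

template <typename T>
class guarded_ptr {
 public:
  guarded_ptr() = default;
  explicit guarded_ptr(T* p) : p_(p) { if (p_) ++Slots()[p_].refs; }
  ~guarded_ptr() { reset(); }
  guarded_ptr(const guarded_ptr&) = delete;
  guarded_ptr& operator=(const guarded_ptr&) = delete;

  T* get() const {
    // A dangling pointer is detected here instead of dereferencing memory
    // that may already hold a different, attacker-controlled object.
    assert(!p_ || !Slots()[p_].freed);
    return p_;
  }

  void reset() {
    if (!p_) return;
    Slot& slot = Slots()[p_];
    if (--slot.refs == 0 && slot.freed) { std::free(p_); Slots().erase(p_); }
    p_ = nullptr;
  }

 private:
  T* p_ = nullptr;
};

template <typename T, typename... Args>
T* GuardedNew(Args&&... args) {
  void* mem = std::malloc(sizeof(T));
  Slots()[mem];  // Create the bookkeeping slot for this allocation.
  return new (mem) T(std::forward<Args>(args)...);
}

template <typename T>
void GuardedDelete(T* p) {
  if (!p) return;
  p->~T();
  Slot& slot = Slots()[p];
  if (slot.refs == 0) { std::free(p); Slots().erase(p); }
  else slot.freed = true;  // Quarantine: memory is not reused while referenced.
}
```

A production design would keep this bookkeeping in the allocator's own per-allocation metadata rather than in a side table, which is what makes the overhead tolerable at Chrome's scale.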
Bugs-- continued to develop and improve the FuzzBench platform, which has helped the security research community develop more efficient fuzzing engines (HonggFuzz and AFL++ received several improvements and lead the benchmarking results). Based on FuzzBench results, we have successfully integrated Entropic as a fuzzing strategy in ClusterFuzz. We have started rewriting and improving several Chrome blackbox fuzzers (e.g. dom, webbot, media, ipc), and also deprecated ~50 duplicate or unneeded fuzzers. In the OSS-Fuzz service, we added first-class fuzzing support for the Go and Rust languages (better compiler instrumentation, crash parsing, and easier project integration) and improved CI (e.g. Honggfuzz checks). Lastly, we worked closely with Android Security and improved ClusterFuzz for on-device and host fuzzing use cases (e.g. syzkaller support, Pixel hardware fuzzing).

The Open Web Platform Security team remained focused on mitigating injection attacks on the one hand, and improving isolation of sensitive content on the other. Q2 was exciting on both fronts! We shipped an initial implementation of Trusted Types, which gives developers the ability to meaningfully combat DOM XSS, and nicely complements CSP's existing mitigations against other forms of injection. Google has deployed Trusted Types in high-value applications like My Google Activity, and we're excited about further rollouts.

The Chrome Security Architecture team started Origin Trials for opt-in origin isolation, allowing origins to use separate processes from the rest of their site. We have also made progress on securing extension content script requests and enforcements for request initiators, and we improved the update mechanism for Android Site Isolation's list of isolated sites. Much of Q2 was spent on cleanup and documentation, though, particularly test infrastructure and flaky test improvements. Finally, we also contributed to MiraclePtr efforts to reduce memory bugs, and we helped more teams use WebUI by adding support for web iframes.

In the world of the Web PKI, TLS certificates issued from default-trusted CAs after 2020-09-01 will be rejected if their lifetime is greater than 398 days, beginning with Chrome 85. See the documentation and FAQ. This is part of a number of changes adopted by the CA/Browser Forum with unanimous support from major browsers, aligning the Baseline Requirements with many existing browser root program requirements. We continued informal cross-browser collaboration and met with the European Union on their eIDAS Regulation, exploring how certificates can be used to provide identity information for domains in a manner consistent with the Web Platform.

Until next time, on behalf of Chrome Security, I wish you all the very best. Andrew

Q1 2020

Greetings,

Amongst everything the first quarter of 2020 has thrown at the world, it has underlined the crucial role the web plays in our lives. As always, the Chrome Security teams have been focusing on the safety of our users, and on keeping Chrome secure and stable for all those who depend on it.

The Chrome Safe Browsing team, with the support of many teams, introduced a new Safe Browsing mode that users can opt in to for “faster, proactive protection against dangerous websites, downloads, and extensions.” We launched the previously announced faster phishing protection to Chrome users on high-memory Android devices. This led to a 116% increase in the number of phishing warnings shown to users for main-frame URLs.
We also launched predictive phishing protections to all users of Chrome Password Manager on Android, which warn users when they type their saved password on an unsafe website. The initial estimate from the launch on the Beta population suggests an 11% increase in the number of warnings shown compared to that on Windows.

Chrome's Enamel team finalized plans to bring users a more secure HTTPS ecosystem by blocking mixed content, mixed downloads, and legacy TLS versions. These changes have now been delayed due to changing global circumstances, but are still planned for release at the appropriate time. To improve how users understand website identity, we experimented with a new security indicator icon for insecure pages. We also experimentally launched a new warning for sites with spoofy-looking domain names. We're now analyzing experiment results and planning next steps for these changes.

The Platform Security team made significant forward progress on enabling the network service to be sandboxed on all platforms (it already is on macOS). This required getting significant changes into Android R, migrating to a new way of using the Data Protection API on Windows (which had the side effect of breaking some crime rings' operations, albeit temporarily), and more. When complete, this will reduce the severity of bugs in that service from Critical to High. We also made progress on Windows sandboxing, working towards adopting AppContainer, and are refactoring our Linux/Chrome OS sandbox to handle disruptive upstream changes in glibc and the kernel. Discussions about the various ways we can improve memory safety continue, and we laid plans to migrate PDFium's XFA support to Oilpan garbage collection, with the help of the Oilpan and V8 teams. This will enable us to safely ship XFA in production, hopefully in 2020.

The Bugs-- team launched FuzzBench, a fuzzer benchmarking platform to bridge the gap between academic fuzzing research and industry fuzzing engines (e.g. libFuzzer, AFL, Honggfuzz). We have integrated new techniques in ClusterFuzz to improve fuzzing efficiency and break coverage walls: dataflow-trace-based fuzzing and in-process grammar mutators (radamsa, peach). We also launched CIFuzz for OSS-Fuzz projects to catch obvious security regressions in a project's continuous integration before they are checked in.

The Chrome Security Architecture (née Site Isolation) team has been strengthening Site Isolation this quarter. We're securing extension content script requests to unify CORS and CORB behavior, and we're progressing with a prototype to let websites opt in to origin-level isolation. To improve Chrome's security architecture, the team is working on a proposal for a new SecurityPrincipal abstraction. We have also cleaned up RenderWidget/RenderView lifetimes. Finally, we are starting to formalize our thinking about privilege levels and their interactions in Chrome, and we are enumerating problem spots in IPC and other areas as we plan the next large projects for the team.

For the past five years, Chrome, along with counterparts at browser vendors such as Mozilla, Microsoft, Apple, Opera, and Vivaldi, has been discussing technical challenges involved in the eIDAS Regulation with members of the European Commission, ETSI, and the European Union Agency for Cybersecurity (ENISA).
These discussions saw more activity this past quarter, with browsers publicly sharing an alternative technical proposal to the current ETSI-defined approach, in order to help the Commission make the technology easier to use and interoperate with the web and browsers. We announced Chrome's 2020 Certificate Transparency plans, with a focus on removing the “One Google Log” policy dependency. Pending updates to travel policy, we have tentatively planned CT Days 2020 and sent out an interest survey for participants.

Until next time, on behalf of Chrome Security I wish you all the very best. Andrew

Q4 2019

As we start 2020 and look forward to a new year and a new decade, the Chrome Security Team took a moment to look back at the final quarter of 2019.

The Safe Browsing team launched two features that significantly improve phishing protections available to Chrome users. We reduced the false negative rate for Safe Browsing lookups in Chrome by launching real-time Safe Browsing lookups for users who have opted in to “Make Searches and Browsing better.” Early results are promising, with up to 55% more warnings shown to users who had this protection turned on, compared to those who did not. A while ago, we launched predictive phishing protections to warn users who are syncing history in Chrome when they enter their Google Account password into suspected phishing sites that try to steal their credentials. With Chrome 79, we expanded this protection to everyone signed in to Chrome, even if Sync is not enabled. In addition, this feature will now work for all the passwords that the user has stored in Chrome's password manager; this will show an estimated 10 times more warnings daily. We also had two telemetry-based launches for sending pings to Safe Browsing when users who have opted into Safe Browsing Extended Reporting focus on password fields and reuse their passwords on Android.

HTTPS adoption has risen dramatically, but many https:// pages still include http:// subresources — known as mixed content. In October, the Usable Security team published a plan to eradicate mixed content from the web. The first phases of this plan started shipping in Chrome 79. In Chrome 79, we relocated the setting that allows users to load mixed content when it's blocked by default. This setting used to be a shield icon in the omnibox, and is now available in Site Settings instead. In Chrome 80, mixed audio and video will be automatically upgraded to https://, and they will be blocked if they fail to load. We started work on a web standard to codify these changes. See this article for how to fix mixed content if you run an affected website.

Website owners should keep their HTTPS configurations up to date with the latest security settings. Back in 2018, we (alongside other browsers) announced plans to remove support for legacy TLS versions 1.0 and 1.1. In October, we updated these plans to announce the specific UI treatments that we'll use for this deprecation. Starting in January 2020, Chrome 79 will label affected websites with a “Not Secure” chip in the omnibox. Chrome 81 will show a full-page error. Make sure your server supports TLS >= 1.2 to avoid this warning treatment.

To continue to polish our security UI, we iterated on our warning for lookalike domains to make the warning more understandable. We introduced a new gray triangle icon for http:// sites to make a clearer distinction between http:// and https://. This icon will appear for some users as part of a small-scale experiment in Chrome 80.
Finally, we cleaned up a large backlog of low-severity security UI vulnerabilities. We fixed, closed, or removed visibility restrictions on 33 out of 42 bugs.

The Platform Security Team sandboxed the network service on macOS in Chrome 79, and continued the work on sandboxing it on other desktop platforms. There is also some forward momentum for reducing its privilege in version R of Android. You can now check the sandboxing state of processes on Windows by navigating to chrome://sandbox. Also on Windows, we experimented with enabling the renderer App Container but ran into crashes likely related to third-party software, and are now working to improve error reporting to support future experimentation. Chrome 79 also saw Code Integrity Guard enabled on supported Windows versions, blocking unsigned code injection into the renderer process. We have also begun investigating new systemic approaches to memory unsafety. Look for news in 2020, as well as continual improvements to the core libraries in Chromium and PDFium.

In Q4, the Bugs-- team moved closer to our goal of achieving 50% fuzzing coverage in Chrome (it's currently at 48%). We added new features to our ClusterFuzz platform, such as Honggfuzz support, libFuzzer support for Android, improved fuzzer weights, and a more accurate statistics-gathering pipeline. We also enabled several new UBSan features across both Chrome and OSS-Fuzz. As part of OSS-Fuzz, we added Go language support and on-boarded several new Go projects. We also gave a talk about the ClusterFuzz platform at Black Hat Europe.

In conversation with our friends and colleagues at Mozilla over the course of Q4, the Open Web Platform Security team made substantial progress on Cross-Origin-Opener-Policy and Cross-Origin-Embedder-Policy. These isolation primitives will make it possible for us to ensure that process isolation is robust, even as we ship new and exciting APIs that give developers more capability. Implementations of both are mostly complete behind a flag, and we're looking forward to getting them out the door and beginning the process of relying upon them when deciding whether to allow cross-thread access to shared memory. Similarly, we're polishing our implementation of Trusted Types based on feedback from origin trials and other vendors' review of the spec. We're still excited about its potential for injection mitigation, and we're looking forward to closing out the last few issues we know about in our implementation.

The Site Isolation team posted to the Google Security Blog and the Chromium Blog about our recent milestones for Site Isolation on Android and defending against compromised renderer processes. We also gave a talk at Black Hat Europe about Site Isolation and how to look for new bypasses for the VRP. At the same time, we made progress on additional enforcement, and we ran experiments to expand Android coverage to more devices. Finally, we also used Q4 to clean up a lot of core Site Isolation code, and we started updating Chrome's WebUI framework to better support new types of Chrome features without large risks of privilege escalation.

In the world of Web PKI security, as part of our ongoing collaboration with Microsoft and Mozilla on the Common CA Database, "Audit Letter Validation" is now enabled for the full set of publicly trusted Certificate Authorities. This tool, developed by Microsoft and Mozilla, automatically validates the contents of audit letters to ensure they include the information required of a publicly trusted CA.
Audit letter validation was previously done by hand, which was not scalable to the CAs' 2,500+ intermediate certificates. Audit Letter Validation enabled us and other root stores to detect a wide variety of issues in the Web PKI that had previously gone unnoticed. We've spent the past quarter leading the incident response effort, working with non-compliant CAs to remediate issues and mitigate future risk. This helps not only Chrome users, but all users who trust these CAs. We can now automatically detect issues as they happen, ensuring prompt remediation. We also collaborate with Mozilla to provide detailed reviews of organizations applying to be CAs, completing several in Q4. These public reviews take an extremely detailed look at how the CA is operated, looking both at compliance and at risky behaviour not explicitly forbidden, as well as at opportunities for improvement based on emerging good practices.

Certificate Transparency (CT) continues to be an integral part of our work. Beyond helping protect users by allowing quick detection of potentially malicious certificates, the large-scale analysis that CT enables has been essential in helping improve the Web PKI. Analysis of CT logs this quarter revealed a number of systemic flaws in how Extended Validation certificates are validated, which has spurred an industry-wide effort to address these issues.

We took steps to protect users from trusting harmful certificates that might be installed by software or which they might be directed to install. Working with the Enamel team, we built on steps we'd previously taken to protect users from certificates used to intercept their communications by adding the ability to rapidly deploy targeted protections via our CRLSet mechanism. CRLSets allow us to respond quickly, using the Component Updater, without requiring a full Chrome release or respin.

More generally, we continue to work on the “patch gap”, where security bug fixes are posted in our open-source code repository but then take some time before they are released as a Chrome stable update. We now make regular refresh releases every two weeks, containing the latest severe security fixes. This has brought down the median “patch gap” from 33 days in Chrome 76 to 15 days in Chrome 78, and we continue to work on improving it.

Finally, you can read what the Chrome (and other Google) Vulnerability Rewards Programs have been up to in 2019 in our recent blog post.

Cheers, Andrew, on behalf of the Chrome security team

Q3 2019

Greetings! With the equinox behind us, it's time for an update on what the Chrome security team has been up to in the third quarter of 2019.

The Chrome Safe Browsing team launched Stricter Download Protections for Advanced Protection users in Chrome, significantly reducing users' exposure to potentially risky downloads. In Q3, Safe Browsing also brought Google password protection to signed-in, non-sync users. This project is code complete, and the team plans to roll it out in Chrome 79.

Enamel, the Security UX team, has been looking at mixed content: http:// subresources on https:// pages. Mixed content presents a confusing UX and a risk to user security and privacy. After a long-running data-gathering experiment on pre-stable channels, the Enamel team publicized plans to start gradually blocking mixed content. In Chrome 79, the team plans to relocate the setting to bypass mixed content blocking from a shield icon in the omnibox to Site Settings.
In Chrome 80, we will start auto-upgrading mixed audio and video to https://, blocking resources if they fail to auto-upgrade. Chrome 80 will also introduce a “Not Secure” omnibox chip for mixed images, which we plan to start auto-upgrading in a future version of Chrome.
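To illustrate the autoupgrading behaviour described above, here is a simplified sketch of the decision logic. It is not Chrome's actual code, and the types and function names are invented for the example; it only captures the rules stated in this update: on an https:// page, mixed audio and video are rewritten to https:// with no plaintext fallback, mixed images still load but mark the page “Not Secure”, and other blockable mixed content stays blocked.

```cpp
// Simplified sketch of the Chrome 80 mixed-content behaviour described above.
// Invented types and names; not Chrome code.
#include <optional>
#include <string>

enum class ResourceType { kAudio, kVideo, kImage, kOther };

struct MixedContentDecision {
  std::optional<std::string> url_to_load;  // std::nullopt means "block".
  bool mark_page_not_secure = false;
};

MixedContentDecision HandleSubresource(bool page_is_https, ResourceType type,
                                       const std::string& url) {
  const std::string kHttp = "http://";
  if (!page_is_https || url.rfind(kHttp, 0) != 0)
    return {url, false};  // Not mixed content; nothing to do.

  switch (type) {
    case ResourceType::kAudio:
    case ResourceType::kVideo:
      // Autoupgrade: fetch over https:// or fail; never fall back to http://.
      return {"https://" + url.substr(kHttp.size()), false};
    case ResourceType::kImage:
      // Chrome 80: still loads, but the omnibox shows a "Not Secure" chip.
      return {url, true};
    default:
      // Other blockable mixed content (e.g. scripts) is already blocked.
      return {std::nullopt, false};
  }
}
```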
In Q3, Enamel also made improvements to our lookalike domain warning, with clearer strings and new heuristics for detecting spoofing attacks. We also added additional signals in our Suspicious Site Reporter extension for power users to identify suspicious sites that they can report to Safe Browsing for scanning. In Chrome 77, we relocated the Extended Validation certificate UI to Page Info; we presented the user research that inspired this change at USENIX Security 2019.

The Platform Security team continues to help improve the memory safety of the PDFium code base, and has finished removing all bare new/delete pairs and ad-hoc refcounting. We continued to push for greater memory safety on a number of fronts, and are busy working on plans for the rest of the year and 2020. Q3 saw a number of projects enter trials on Beta and Stable, including the V2 sandbox for the GPU process and the network service sandbox on macOS, and Code Integrity Guard on Windows. Look out for news of their launch in next quarter's update! The XSS Auditor, which attempted to detect and prevent reflected XSS attacks, was removed in Chrome 78. It had a number of issues, and in the end the cons outweighed the pros.

The Bugs-- team added FuzzedDataProvider (FDP) as part of Clang, making it simple to write fuzz targets that require multiple inputs with just a single header file include. We refactored ClusterFuzz code to make it easier to add new fuzzing engines, and migrated libFuzzer to use this new interface. We rewrote the ClusterFuzz reproduce tool, which is now part of the main ClusterFuzz GitHub repo. On the OSS front, we launched new features in OSS-Fuzz: Go support, x86 config support, FDP support, and OSS-Fuzz Badges. We also made fuzzer strategy weight adjustments based on multi-armed bandit experiments. Jonathan Metzman presented at Black Hat (USA) on structure-aware fuzzing.

The Open Web Platform Security team has been working on Trusted Types, the Origin Trial for which is about to finish. We are making a number of changes to the feature, mainly to aid deployment and debugging of Trusted Types deployments, as well as some overall simplifications. We expect this work to finish in early Q4, and to launch in the same quarter.

The Site Isolation team reached two more important milestones in Q3. First, we enabled Site Isolation for password sites on Chrome for Android (on devices with at least 2GB of memory), bringing Spectre mitigations to mobile devices! Second, we added enough compromised-renderer protections on Chrome for Desktop to add cross-site data disclosure to the scope of the Chrome VRP! We're very excited about the new protections, and we continue to improve the defenses on both Android and Desktop. Separately, we presented our USENIX Security paper in August and launched OOPIF-based PDF support, clearing the way to remove BrowserPlugin.

In the Web PKI space, the government of Kazakhstan recently created a root CA and, with local ISPs, engaged in a campaign to encourage all KZ citizens to install and trust the CA. RIPE Atlas detected this CA conducting a man-in-the-middle attack on social media sites. Chrome blocked this certificate to prevent it from being used for MITMing Chrome users. In conjunction with several other major browsers, we made a joint PR statement against this type of intentional exploitation of users. Following this incident, we began working on a long-term solution to handling MITM CAs in Chrome.
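For readers wondering what “blocking a certificate” looks like mechanically, the sketch below shows one simplified approach: rejecting any chain that contains a public key whose SubjectPublicKeyInfo hash is on a blocklist, similar in spirit to the CRLSet mechanism described in the Q4 2019 update above. It is an illustration written against OpenSSL/BoringSSL APIs, not Chrome's actual verifier code, and the function names are our own.

```cpp
// Illustrative only: reject any chain containing a blocked public key,
// identified by the SHA-256 hash of its SubjectPublicKeyInfo.
#include <openssl/crypto.h>
#include <openssl/sha.h>
#include <openssl/x509.h>

#include <set>
#include <string>
#include <vector>

std::string SpkiSha256(X509* cert) {
  unsigned char* spki_der = nullptr;
  const int len = i2d_X509_PUBKEY(X509_get_X509_PUBKEY(cert), &spki_der);
  if (len <= 0) return {};
  unsigned char digest[SHA256_DIGEST_LENGTH];
  SHA256(spki_der, static_cast<size_t>(len), digest);
  OPENSSL_free(spki_der);
  return std::string(reinterpret_cast<const char*>(digest), sizeof(digest));
}

// `blocked` would be populated out of band (in Chrome's case, the analogous
// data is pushed via the Component Updater rather than hard-coded).
bool ChainIsBlocked(const std::vector<X509*>& chain,
                    const std::set<std::string>& blocked) {
  for (X509* cert : chain) {
    if (blocked.count(SpkiSha256(cert))) return true;
  }
  return false;
}
```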
In hacker philanthropy news, in July we increased the amounts awarded to security researchers who submit security bugs to us under the Chrome Vulnerability Reward Program. The update aligned both categories and amounts with the areas we'd like researchers to focus on. This generated some good press coverage, which should help spread the word about the Chrome VRP. Tell your friends, and submit your Chrome security bugs here; they'll be considered for a reward when they're fixed!

In Chrome security generally, we've been working to address an issue called the “patch gap”, where security bug fixes are posted in our open-source code repository but then take some time before they are released as a Chrome stable update. During that time, adversaries can use those fixes as evidence of vulnerabilities in the current version of Chrome. To reduce this problem, we've been merging more security fixes directly to stable, and we're now always making a security respin mid-way through the six-week development cycle. This has reduced the median patch gap from ~33 days in Chrome 76 to ~19 days in Chrome 77. This is still too long, and we're continuing to explore further solutions.

Cheers, Andrew, on behalf of the Chrome security team
Q2 2019

Greetings,

With 2019 already more than 58% behind us, here's an update on what Chrome Security was up to in the second quarter of this year.

Chrome Safe Browsing is launching stricter download protections for Advanced Protection users, and team-internal testing ("teamfood") of the policy has begun in M75; this will launch broadly with M76. It significantly reduces an Advanced Protection user's exposure to potentially risky downloads by showing them warnings when they try to download "risky" files (executable files that haven't been vetted by Safe Browsing) in Chrome.

Users need to understand site identity to make safe decisions on the web. Chrome Security UX published a USENIX Security paper exploring how users understand modern browser identity indicators. To help users understand site identity from confusing URLs, we launched a new warning detecting domains that look similar to domains you've visited in the past. We published a guide to how we triage spoofing bugs involving such domains. We also built a Suspicious Site Reporter extension that power users can use to report deceptive sites to Google's Safe Browsing service, to help protect non-technical users who might not be able to discern a deceptive site's identity as well. Site identity is meaningless without HTTPS, and we continue to promote HTTPS adoption across the web. We implemented an experimental flag to block high-risk nonsecure downloads initiated from secure contexts. And we continued to roll out our experiment that auto-upgrades mixed content to HTTPS, pushing to 10% of the beta channel and adding new metrics to quantify breakage.

In addition to helping with the usual unfaltering flow of security launch reviews, Platform Security engineers have been continuing to investigate ways to help Chrome engineers create fewer memory safety bugs for ClusterFuzz to find. While performance is a concern when adding checks to libraries, some reports of regressions nicely turned out to be red herrings. On macOS, Chrome executables are now signed with the hardened runtime options enabled. Also on macOS, the change to have Mojo use Mach IPC, rather than POSIX file descriptors/socket pairs, is now fully rolled out. On Windows, we started to enable Arbitrary Code Guard on processes that don't need dynamic code at runtime.

We've done a lot of analysis on the types of security bugs which are still common in Chromium. The conclusion is that memory safety is still our biggest problem, so we've been working to figure out the best next steps to solve that, both in terms of safer C++ and in terms of investigating other choices to see if we can parse data in a safe language without disrupting the Chromium development environment too much. We've also been looking at how security fixes are released, to ensure fixes get to our users in the quickest possible way.

We have also improved some of the automatic triage that ClusterFuzz does to make sure that bugs get the right priority. To augment our fuzzing efforts and find vulnerabilities for known bad patterns, we have decided to invest in static code analysis efforts with Semmle. We have written custom QL queries and reported 15 bugs so far (some of these were developed in collaboration with Project Zero). We have made several changes to improve fuzzing efficiency, including leveraging DFSan for focused mutations, adding support for custom mutators, build-type optimizations (sanitizers without instrumentation), and libFuzzer fork mode on Windows.
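As an illustration of what a fuzz target looks like, and of the input-splitting pattern the next item refers to, here is a minimal libFuzzer-style sketch. It uses FuzzedDataProvider, the single-header helper mentioned in the Q3 2019 update above; ParseMessage is a made-up stand-in for whatever API is being fuzzed.

```cpp
// Minimal libFuzzer-style fuzz target. ParseMessage() is a hypothetical API
// under test, defined here only so the example is self-contained; a real
// target would call into the library being fuzzed. Build with e.g.
// `clang++ -fsanitize=fuzzer,address fuzz_target.cc`.
#include <fuzzer/FuzzedDataProvider.h>

#include <cstddef>
#include <cstdint>
#include <string>

static bool ParseMessage(const std::string& header, const std::string& body,
                         uint16_t version) {
  return version != 0 && !header.empty() && body.size() < (1u << 20);
}

extern "C" int LLVMFuzzerTestOneInput(const uint8_t* data, size_t size) {
  // FuzzedDataProvider splits the single fuzz input into the several typed
  // values the API needs, so one target can exercise multi-argument code.
  FuzzedDataProvider provider(data, size);
  const uint16_t version = provider.ConsumeIntegral<uint16_t>();
  const std::string header = provider.ConsumeRandomLengthString(64);
  const std::string body = provider.ConsumeRemainingBytesAsString();
  ParseMessage(header, body, version);
  return 0;
}
```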
We have upstreamed a helper module in libFuzzer to make it easy to split fuzz input and decrease fuzz target complexity.

The Open Web Platform Security team was mainly focused on Trusted Types, and conducted an Origin Trial for the feature in Q2. The team is presently scrambling to address the issues raised by public feedback, to modify the feature to make it easier to deploy, and to generally make Trusted Types fit for a full launch.

The Site Isolation team published their USENIX Security 2019 paper about the desktop launch (Site Isolation: Process Separation for Web Sites within the Browser), which will be presented in August. We now have a small Stable channel trial of Android Site Isolation, which isolates the sites that users log into rather than all sites. That work included persisting and clearing the sites to isolate, fixing text autosizing, and adding more metrics. Separately, we ran a trial of isolating origins rather than sites to gauge overhead, and we helped ship Sec-Fetch-Site headers. We also started collecting data on how well CORB is protecting sensitive resources in practice, and we've started launch trials of out-of-process-iframe-based PDFs (which adds CORB protection for PDFs).

The Chrome OS Security team has been working on the technology underlying Chrome OS verified boot. Going forward, dm_verity will use SHA256 as its hashing algorithm, replacing SHA1. So long, weak hashing algorithm! We also spent some time making life easier for Chrome OS developers. Devs now have access to a time-of-check/time-of-use (TOCTOU) safe file library, and a simplified mechanism for building system call filtering policies.

Cheers, Andrew, on behalf of the Chrome security team

Q1 2019

Greetings,

Here's an update on what Chrome Security was up to in the first quarter of 2019!

The Site Isolation team finished the groundwork for Android Beta Channel field trials, and the trials are now in progress. This Android mode isolates a subset of sites that users log into, to protect site data with less overhead than isolating all sites. We also started enforcing Cross-Origin Read Blocking for extension content script requests, maintaining a temporary allowlist for affected extensions that need to migrate. We tightened compromised renderer checks for navigations, postMessage, and BroadcastChannel. We also continued cross-browser discussions about Long-Term Web Browser Mitigations for Spectre, as well as headers for isolating pages and enabling precise timers. Finally, we are close to migrating PDFs from BrowserPlugin to out-of-process iframes, allowing BrowserPlugin to be deleted.

In the last several years, the Usable Security team has put a lot of effort into improving HTTPS adoption across the web, focusing on getting top sites to migrate to HTTPS for their top-level resources. We're now starting to turn our attention to insecure subresources, which can harm user security and privacy even if the top-level page load is secure. We are currently running an experiment on Canary, Dev, and Beta that automatically upgrades insecure subresources on secure pages to HTTPS. We also collected metrics on insecure downloads in Q1 and have started putting together a proposal to block high-risk insecure downloads initiated from secure pages. People need to understand website identity to make good security and trust decisions, but lots of research suggests that they don't. We summarized our own research and thinking on this topic in an Enigma 2019 talk.
We open-sourced a tool that we use to help browser developers display site identity correctly. We also published a set of URL display guidelines and subsequently incorporated them into the URL standard.

The Safe Browsing team increased coverage against malware and unwanted software downloads by changing the logic of which file types to check against Safe Browsing. We flipped the heuristic to an allow-list of known-safe file extensions, and made the rest require verification. This adds protection both from uncommon file extensions (where attackers convince users to rename them to a common executable type after scanning) and from Office document types, where the incidence of malware has increased significantly.

The Chrome Cleanup Tool is now in the Chromium repository! This lets the public audit the data collected by the tool, which is a win for user privacy, and gives an example of how to sandbox a file scanner. The open source version includes a sample scanner that detects only test files, while the version shipped in Chrome will continue to depend on internal resources for a licensed engine.

The Bugs-- team has open sourced ClusterFuzz, a fuzzing infrastructure that we have been developing over the past 8 years! This army of robots has found 30,000+ bugs in Chrome and 200+ open source projects. To improve the efficiency of our cores, we have developed automated fuzzer weight management based on fuzzer quality, freshness, and code changes. Additionally, we have developed several new WebGL fuzzers (some of them leverage GraphicsFuzz) and found 63 bugs. We have significantly scaled up fuzzing Chrome on Android (x86) by using Cuttlefish over GCE. Lastly, we have transitioned Chrome code coverage tools development to the Chrome Infra team; see the new dashboard here.

The Platform Security team added some checks for basic safety to our base and other fundamental libraries, and is investigating how to do more while maintaining efficiency (run time, run space, and object code size). We hope to continue to do more, as well as investigate how to use absl without forgoing the safety checks. We've been having great success with this kind of thing in PDFium as well, where we've found that the compiler can often optimize away these checks, and investigating where it hasn't been able to has highlighted several pre-existing bugs. On macOS, we have re-implemented the Mojo IPC Channel under the hood to use Mach IPC, which should help reduce system resource shortage crashes. This also led to the development of two libprotobuf-mutator (LPM) fuzzers for Mach IPC servers. We're working on auto-generating an LPM-based fuzzer from Mojo API descriptions to automatically fuzz Mojo endpoints, in-process. We also continue to write LPM fuzzers for tricky-to-reach areas of the code like the disk cache. We are also investigating reducing the privilege of the network process on Windows and macOS.

Our next update will be the first full quarter after joining Chrome Trust and Safety. We're looking forward to collaborating with more teams who are also working to keep our users safe!

Cheers, Andrew on behalf of Chrome Security

Q4 2018

Greetings,

With the new year well underway, here's a look back at what Chrome Security was up to in the last quarter of 2018.

In our quest to make HTTPS the default, we started marking HTTP sites with a red Not Secure icon when users enter data into forms. This change launched to stable in Chrome 70 in October.
A new version of the HTTPS error page also launched to the stable channel as an experiment: it looks the same but is much improved under the hood. We built a new version of the HTTPS Transparency Report for top sites; the report now displays aggregate statistics for the top sites instead of individual sites. We also built a new interstitial warning to notify Chrome users of unclear mobile subscription billing pages. The new warning and policy launched in Chrome 71.

The Bugs-- team ported libFuzzer to Windows, which previously lacked coverage-guided fuzzing support, and this resulted in 93 new bugs. We hosted a month-long Fuzzathon in November, focused on improving fuzz coverage for Chrome's browser process and Chrome OS. This effort led to 85 submissions and 157 bugs. We have added more automation towards auto-adjusting the CPU cycles allocated to various fuzzers based on code coverage changes and recency of fuzzer submission. Lastly, we added Linux x86 fuzzing configurations (1, 2) for libFuzzer, which resulted in 100 new bugs.

In Platform Security, we started sandboxing the network service on macOS. On Windows, we're starting to experiment with an improved GPU sandbox. The network service has the beginnings of a sandbox on Windows, and we'll be working on tightening it in future work. We're also continuing to gradually harden the implementations of core Chromium libraries in base/ and elsewhere. We had a great adventure finding and fixing bugs in SQLite as well, including an innovative and productive new fuzzer. We're continuing to hammer away at bugs in PDFium, and refactoring it significantly.

To help sites defend against cross-site scripting (XSS), we are working on Trusted Types. This aims to bring a derivative of Google's "Safe HTML Types" — which relies on external tooling that may be incompatible with existing workflows or code bases — directly into the web platform, thus making it available to everyone. Both Google-internal and external teams are presently working on integrating Trusted Types into existing frameworks, which, if successful, offers the chance to rapidly bring this technique to large parts of the web. Chrome 73 will see an origin trial.

The work on Site Isolation continues as we focus on enabling it on Android: support for adding isolated origins at runtime, fixing issues with touch events, and balancing process usage for maximizing stability. We added improvements to CORB to prevent bypasses from exploited renderers, we announced extensions changes for content script requests, and we reached out to affected authors with guidance on how to update. Additionally, we continue to add more enforcements to mitigate compromised renderers, which is the ultimate end goal of the project. Last but not least, we have worked to improve code quality and clean up architectural deficiencies which accumulated while developing the project.

Chrome OS 71 saw the initial, limited release of USBGuard, a technology that improves the security of the Chrome OS lock screen by (carefully) blocking USB devices on the lock screen.

As ever, many thanks to all those in the Chromium community, and our VRP reporters, who help make the Web more secure!

Cheers, Andrew, on behalf of the Chrome security team

Q3 2018

Greetings! Chrome turned 10 in September! Congrats to the team on a decade of making the web more secure.

In the quest to find security bugs, the Bugs-- team incorporated machine learning into the ClusterFuzz infrastructure, using an RNN model to improve corpus quality and code coverage.
We experimented with improving fuzzing efficiency by adding instability handling and mutation stats strategies inside libFuzzer. We added a new Mojo service fuzzer by extending the Mojo JavaScript bindings, and found security bugs. We also migrated our fuzzing infrastructure to provide Clang Source-based Code Coverage reports and deprecated Sancov.

The Platform Security team continued to add hardening and checks to fundamental classes and libraries in base/, and did some of the same work in PDFium and other parsers and interpreters in Chromium. We also provided some sandboxing consulting to other teams for their new services, including audio and networking. Chrome on macOS now has a new sandbox architecture, launched in Chrome 69, which immediately initializes when a new process executes. This reduces Chrome's attack surface and allows better auditing of system resource access between macOS versions.

Chrome OS Security wrapped up the response to the L1TF vulnerability, fixes for which enabled shipping Linux apps on Chrome OS without exposing users to extra risk. Moreover, we received an (almost) full-chain exploit for Chrome OS that both validated earlier sandboxing work (like that for Shill, Chrome OS's connection manager) and also shed light on further hardening work that was wrapped up in Q3.

Chrome 70 shipped TLS 1.3, although we did have to disable a downgrade check in this release due to a last-minute incompatibility with some network devices.

After the excitement of enabling Site Isolation by default on desktop platforms in Q2, the team has been focused on building a form of Site Isolation suitable for devices that run Android, which have more limited memory and processing power. We've been fixing Android-specific issues (alongside a lot of maintenance for the desktop launch), we have started field trials for isolating a subset of sites, and we are working on ways to add more sites to isolate at runtime. Separately, we added several more enforcements to mitigate compromised renderers, to extend the protection beyond Spectre.

Users should expect that the web is safe by default, and that they'll be warned when there's an issue. In Chrome 68, we hit a milestone for Chrome security UX, marking all HTTP sites as "not secure". We continued down that path in Chrome 70, showing the "not secure" string in red when users enter data on an HTTP page. We began stepping towards removing Chrome's positive security indicators so that the default unmarked state is secure, starting by removing the "Secure" wording in Chrome 69. We would like to experiment with mixed content autoupgrading to simplify (i.e. improve) the user experience, and are currently collecting metrics about the impact. We're also working to improve Chrome security UX under the hood: we launched committed HTTPS interstitials on Canary and Dev.

As ever, many thanks to all those in the Chromium community, and our VRP reporters, who help make the Web more secure!

Cheers, Andrew, on behalf of the Chrome security team

Q2 2018

Greetings and salutations,

It's time for another (rather belated!) update from your friends in Chrome Security, who are hard at work to keep Chrome the most secure platform to browse the Internet.

We're very excited that Site Isolation is now enabled by default as a Spectre mitigation in M67 for Windows, macOS, Linux, and Chrome OS users!
This involved an incredible number of fixes from the team in Q2 to make out-of-process iframes fully functional, especially in areas like painting, input events, and performance, and it included standardizing Cross-Origin Read Blocking (CORB). Stay tuned for more updates on Site Isolation coming later this year, including additional protections from compromised renderers. Chris and Emily talked about the Spectre response, Site Isolation, and necessary developer steps at I/O. We also announced that security bugs found in Site Isolation could qualify for higher VRP reward payments for a limited time.

In their quest to find security bugs, the Bugs-- team integrated Clang Source-based Code Coverage into the Chromium project and launched a dashboard to make it easy for developers to see which parts of the code are not covered by fuzzers and unit tests. We wrote a Mojo service fuzzer that generates fuzzing bindings in JS and found some scary vulnerabilities. We added libFuzzer fuzzing support in Chrome OS, got new fuzz target contributions from Chrome OS developers, and found several bugs. We made numerous improvements to our ClusterFuzz fuzzing infrastructure; examples include dynamically adjusting CPU allocation for inefficient fuzz targets until their performance issues are resolved, cross-pollinating corpora across fuzz targets and projects, and more.

The Platform Security team has been working on adding bounds checks and other sanity checks to base/containers, as part of an overarching effort to harden heavily-used code and catch bugs. We've had some good initial success and expect to keep working on this for the rest of the year. This is a good area for open source contributors and VRP hunters to work on, too!

In our quest to move the web to 100% HTTPS, we prepared for showing Not Secure warnings on all http:// pages, which started in M68. We sent Search Console messages to affected sites and expanded our enterprise controls for this warning. We announced some further changes to Chrome's connection security indicators: in M69, we'll be removing the Secure chip next to https:// sites, and in M70 we'll be turning the Not Secure warning red to more aggressively warn users when they enter data on a non-secure page. We also added some features to help users and developers use HTTPS more often. The omnibox now remembers pages that redirect from http:// to https://, so that users don't get sent to the http:// version in the future. We fixed a longstanding bug with the upgrade-insecure-requests CSP directive that helps developers find and fix mixed content: it now upgrades requests when following redirects. Finally, we added a setting to chrome://flags#unsafely-treat-insecure-origin-as-secure to let developers more easily test HTTPS-only features, especially on Android and ChromeOS.

To better protect users from unwanted extensions, we announced the deprecation of inline installations for extensions. This change will result in Chrome users being directed to the Chrome Web Store when installing extensions, helping to ensure users can make a better-informed decision.

Chrome OS spent a big chunk of Q2 updating and documenting our processes to ensure we can better handle future incidents like Spectre and Meltdown. We expanded our security review guidelines so that they can be used both by security engineers while reviewing a feature and by SWE and PM feature owners as they navigate the Chrome OS launch process.
We continued our system hardening efforts by making Shill, the Chrome OS network connection manager, run in a restrictive, non-root environment starting with M69. Shill was exploited as part of a Chrome OS full-chain exploit, so sandboxing it was something that we'd been wanting to do for a long time. With PIN sign-in launching in M68, the remaining work to make the underlying user credential brute-force protection mechanism more robust is underway, and we plan to enable it for password authentication later this year. Hardening work also happened on the Android side, as we made progress on functionality that will allow us to verify generated code on Android using the TPM. Q2 continued to require incident response work on the Chrome OS front, as the fallout from Spectre and Meltdown included several researchers looking into the consequences of speculative execution. The good news is that we started receiving updated microcode for Intel devices, and these updates will start to go out with M69.

As ever, many thanks to all those in the Chromium community, and our VRP reporters, who help make the Web more secure!

Cheers, Andrew on behalf of the Chrome Security Team

Q1 2018

Greetings and salutations,

It's time for another update from your friends in Chrome Security, who are hard at work trying to keep Chrome the most secure platform to browse the Internet. We'd also like to welcome our colleagues in Chrome OS security to this update; you'll be able to hear what they've been up to each quarter going forward.

In our effort to find and fix bugs, we collaborated with the Skia team and integrated 21 fuzz targets into OSS-Fuzz for continuous 24x7 fuzzing on Skia trunk. So far, we have found 38 security vulns! We also added several new fuzz targets as part of a 2-week bug bash (e.g. a multi-msg Mojo fuzzer, an audio decoder fuzzer, an appcache manifest parsing fuzzer, JSON fuzzer improvements, etc.) and found an additional vulnerability through code review. We added libFuzzer support for Chrome OS and integrated it with ClusterFuzz; a sample puffin fuzzer found 11 bugs (including 2 security bugs). We made several improvements to the AFL fuzzing engine integration and fuzzing strategies. This brings it on par with libFuzzer in terms of the number of bugs found; it's now ~3X more productive than before! We added support for building MSan-instrumented system libraries for newer Debian distros (1, 2).

To help users infected with unwanted software, we moved the standalone Chrome Cleanup Tool into Chrome. Scanning and cleaning Windows machines can now be triggered by visiting chrome://settings/cleanup. There was some misunderstanding on Twitter about why Chrome was scanning, which we clarified. We also pointed people to the unwanted software protection section of Chrome's privacy whitepaper so they can understand what data is and isn't sent back to Google.

In our effort to move the web to 100% HTTPS, we announced that Chrome will start marking all HTTP pages with a Not Secure warning in July. This is a big milestone that concludes a multi-year effort to roll out this warning to all non-secure pages. Alongside that announcement, we added a mixed content audit to Lighthouse, an automated tool for improving webpage quality. This audit helps developers find and fix mixed content, a major hurdle for migrating to HTTPS. We also announced the deprecation of AppCache in nonsecure contexts.

In addition to MOAR TLS, we also want more secure and usable HTTPS, or BETTER TLS.
With that goal in mind, we made changes to get better metrics about features intended to help users with client or network misconfigurations that break their HTTPS connections (like our customized certificate warnings). We also added more of these "helper" features: for example, we now bundle help content targeted at users who are stuck with incorrect clocks, captive portals, or other configuration problems that interfere with HTTPS. Finally, we started preparing for Chrome's upcoming Certificate Transparency enforcement deadline by analyzing and releasing some metrics about the state of CT adoption so far.

To help make security more usable in Chrome, we're exploring how URLs are problematic. We removed https/http schemes and www/m subdomains from the steady-state omnibox, and we're studying the impact of removing positive security indicators that might mislead or distract from the important security information in the origin.

Chrome OS Security had a busy Q1. The vulnerabilities known as Meltdown and Spectre were disclosed in early January, and a flurry of activity followed as we rushed to patch older kernels against Meltdown in Chrome OS 66 and incorporated Spectre fixes for ARM Chrome OS devices in Chrome OS 67. We also started codifying our security review guidelines in a HOWTO doc, to allow the larger Chrome OS team to better prepare for security reviews of their features. Moreover, after being bitten by symlinks and FIFOs being used as part of several exploit chains, we finally landed symlink and FIFO blocking in Chrome OS 67. On the hardware-backed security front, we've split off the component that allows irreversible once-per-boot decisions into its own service, bootlockboxd. Finally, work is nearing completion for a first shipping version of a hardware-backed mechanism to protect user credentials against brute-force attacks. This will allow PIN codes as a new authentication mechanism for Chrome OS meeting our authentication security guidelines, and we'll use it to upgrade password-based authentication to a higher security bar subsequently.

Spectre kept us busy on the Chrome Browser side as well. The V8 team landed a large number of JIT mitigations to make Spectre exploits harder to produce, and high-resolution timers like SharedArrayBuffer were temporarily disabled; more details on our response here. In parallel, the Site Isolation team significantly ramped up efforts to get Site Isolation launched as a Spectre mitigation, since it helps avoid having data worth stealing anywhere in a compromised process. In Q1, we substantially improved support for the Site Isolation enterprise policies that launched prior to the Spectre disclosure, including:
Thanks to these improvements, we have been running field trials and are preparing to launch the strict Site Isolation policy on desktop. We talked about much of this work at Google I/O.

Finally, we continue to work on exploit mitigations and other security hardening efforts. For example, Oilpan, Blink's garbage-collecting memory management system, removed its inline metadata, which makes it more difficult to overwrite with memory corruption bugs. This was the culmination of several years of effort, as performance issues were worked through. In Android P, we refactored the WebView zygote to become a child of the main app_process zygote, reducing memory usage and helping with the performance of future Site Isolation efforts. Members of Platform Security also helped coordinate the response to Spectre and Meltdown, and still managed to find time to conduct their routine reviews of new Chrome features.

Q4 2017

Greetings and salutations,

It's time for another update from your friends in Chrome Security, who are hard at work trying to make Chrome the most secure platform to browse the Internet. As it's the start of 2018, we reflected on a year's worth of security improvements, and announced new stats around our VRP and Safe Browsing warnings. Here are some highlights from the last quarter of 2017:

In an effort to find and fix bugs, we (Bugs--):
Other than fixing bugs, we (MOAR TLS, Enamel, Safe Browsing) also:
As always, we invest a lot in security architecture and exploit mitigations. Last quarter, we (Platform Security / Site Isolation):
To help users who inadvertently install unwanted software, we (Chrome Protector):
Lastly, we (BoringSSL) deployed TLS 1.3 to Chrome Stable for a couple of weeks in December and gathered valuable data. As ever, many thanks to all those in the Chromium community who help make the Web more secure! Cheers Andrew on behalf of the Chrome Security Team Q3 2017Greetings and salutations, It's time for another update from your friends in Chrome Security, who are hard at work trying to make Chrome the most secure platform to browse the Internet. Given you're reading this, you might well be interested in two whitepapers evaluating enterprise browser security that were released recently. Beyond that, here's a recap from last quarter: Bugs-- team
Enamel, Permissions
MOAR TLS
Chrome Safe Browsing
Platform Security
Site Isolation
As ever, many thanks to all those in the Chromium community who help make the web more secure! Cheers Andrew, on behalf of the Chrome Security Team Q2 2017Greetings and salutations, It's time for another update from your friends in Chrome Security, who are hard at work trying to make Chrome the most secure platform to browse the Internet. Here’s a recap from last quarter: The Bugs-- team has released a new tool to make ClusterFuzz testcase reproduction easy for developers. Our open source fuzzing efforts (aka OSS-Fuzz) continue to improve the security of the overall web (86 projects, 1859 bugs, see recent blog post here). We have written a new JavaScript fuzzer that has filed 102 bugs to date, many with security implications. We also found some interesting vulnerabilities (1, 2, 3) through our code auditing efforts. We integrated the Safe Browsing API with WebView starting in Android O, allowing custom interstitial blocking pages. WebView developers will be able to opt in to check URLs against Google Safe Browsing’s list of unsafe websites. We understand that sites which repeatedly prompt for powerful permissions often annoy users and generate warning fatigue. Starting in Chrome 59, we temporarily block permission requests if users have dismissed a permission prompt from a site multiple times. We’re also moving forward with plans to deprecate permissions in cross-origin iframes by default. Permission requests from iframes have the potential to mislead users into granting access they didn’t intend. The Platform Security team has concluded several years of A/B experimentation on Android, and with Chrome 58 we have turned on the Seccomp-BPF sandbox for all compatible devices. This sandbox filters system calls to reduce the attack surface of the Linux kernel in renderer processes. Currently about 50% of Android devices support Seccomp, and this number is rising at a steady rate. In Chrome 59, you can navigate to about:sandbox to see whether your Android device supports Seccomp. We have migrated PDFium to use PartitionAlloc for most allocations, with distinct partitions for strings, array buffers, and general allocations. In Chrome 61, all three partitions will be active. We continue to work on MOAR+BETTER TLS and announced the next phase of our plan to help people understand the security limitations of non-secure HTTP. Starting in Chrome 62 (October), we’ll mark HTTP pages as “Not secure” when users enter data in forms, and on all HTTP pages in Incognito mode. We presented new HTTPS migration case studies at Google I/O, focusing on real-world site metrics like SEO, ad revenue, and site performance. We experimented with improvements to Chrome’s captive portal detection on Canary and launched them to Stable in Chrome 59, to avoid a predicted 1% of all certificate errors that users see. Also, users can now restore the Certificate information link to the Page Information bubble! Those working on the Open Web Platform have implemented three new Referrer Policies, giving developers more control over their HTTP Referer headers and bringing our implementation in line with the spec. We also fixed a longstanding bug so that site owners can now use upgrade-insecure-requests in conjunction with CSP reporting, allowing site owners to both upgrade and remediate HTTP references on their HTTPS sites. After our launch of --isolate-extensions in Chrome 56, the Site Isolation team has been preparing for additional uses of out-of-process iframes (OOPIFs).
We implemented a new --isolate-origins=https://example.com command line flag that can give dedicated processes to a subset of origins, which is an important step towards general Site Isolation. We also prepared the OOPIF-based <webview> field trial for Beta and Stable channels, and we ran a Canary field trial of Top Document Isolation to learn about the performance impact of putting all cross-site iframes into one subframe process. We've been improving general support for OOPIFs as well, including spellcheck, screen orientation, touch selection, and printing. The DevTools team has also helped out: OOPIFs can now be shown in the main frame's inspector window, and DevTools extensions are now more fully isolated from DevTools processes. As ever, many thanks to all those in the Chromium community who help make the web more secure! Q1 2017Greetings and salutations, It's time for another update from your friends in Chrome Security, who are hard at work trying to make Chrome the most secure platform to browse the Internet. Here’s a recap from last quarter: Our Bugs-- effort aims to find (and exterminate) security bugs. In order to get bugs fixed faster, we released a new tool to improve developer experience when trying to reproduce ClusterFuzz bugs. We have overhauled a significant part of the ClusterFuzz UI, which now features a new fuzzer statistics page, a crash statistics page, and a fuzzer performance analyzer. We’ve also continued to improve our OSS-Fuzz offering, adding numerous features requested by developers and reaching the 1,000-bug milestone with 47 projects in just five months since launch. Members of the Chrome Security team attended the 10th annual Pwn2Own competition at CanSecWest. While Chrome was again a target this year, no team was able to demonstrate a fully working chain to Windows SYSTEM code execution in the time allowed! Bugs still happen, so our Guts effort builds in multiple layers of defense. Chrome 56 takes advantage of Control Flow Guard (CFG) on Windows for Microsoft system DLLs inside Chrome.exe processes. CFG makes exploiting corruption vulnerabilities more challenging by limiting valid call targets, and is available from Win 8.1 Update 3. Site Isolation makes the most of Chrome's multi-process architecture to help reduce the scope of attacks. The big news in Q1 is that we launched --isolate-extensions to Chrome Stable in Chrome 56! This first use of out-of-process iframes (OOPIFs) ensures that web content is never put into an extension process. To maintain the launch and prepare for additional uses of OOPIFs, we fixed numerous bugs, cleaned up old code, reduced OOPIF memory usage, and added OOPIF support for more features (e.g., IntersectionObserver, and hit testing and IME on Android). Our next step is expanding the OOPIF-based <webview> trial from Canary to Dev channel and adding more uses of dedicated processes. Beyond the browser, our web platform efforts foster cross-vendor cooperation on developer-facing security features. Over the holidays, Google's security team gave us a holiday gift consisting entirely of interesting ways to bypass CSP's nonces. We've fixed some obvious bugs they uncovered, and we'll continue working with other vendors to harden the spec and our implementations. In other CSP news, we polished a mechanism to enforce CSP on child frames, shipped a `script-sample` property in CSP reports, and allowed hashes to match external scripts.
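The last two CSP changes mentioned above (the `script-sample` property in violation reports, and hashes that can match external scripts) are easiest to picture as a concrete policy. Below is a minimal sketch, not anything Chrome ships, using a plain Node.js server in TypeScript; the digest value and the /csp-report endpoint are placeholders.

```ts
// Sketch of a hash-based CSP with reporting. The digest is a placeholder for
// the base64-encoded SHA-256 of the script's contents.
import { createServer } from 'node:http';

const csp = [
  "script-src 'sha256-REPLACE_WITH_BASE64_DIGEST_OF_APP_JS'",
  "object-src 'none'",
  "report-uri /csp-report", // hypothetical reporting endpoint on this site
].join('; ');

createServer((req, res) => {
  res.setHeader('Content-Security-Policy', csp);
  res.setHeader('Content-Type', 'text/html');
  // With CSP3, the same hash can also authorize an external script whose
  // integrity metadata matches, e.g.
  //   <script src="/app.js" integrity="sha256-REPLACE_WITH_BASE64_DIGEST_OF_APP_JS"></script>
  // Violation reports sent to /csp-report may carry a `script-sample`
  // snippet of the offending script, which helps debug what was blocked.
  res.end('<h1>CSP demo</h1>');
}).listen(8080);
```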
We're also gathering data to support a few dangling markup mitigations, and dropped support for subresource URLs with embedded credentials and legacy protocols. We also spend time building security features that users see. To protect users from Data URI phishing attacks, Chrome shows the “not secure” warning on Data URIs and intends to deprecate and remove content-initiated top-frame navigations to Data URIs. We also brought AIA fetching to Chrome for Android, and early metrics show over an 85% reduction in the fraction of HTTPS warnings caused by misconfigured certificate chains on Android. We made additional progress on improving Chrome’s captive portal detection. Chrome now keeps precise attribution of where bad downloads come from, so we can catch malware and UwS earlier. Chrome 57 also saw the launch of a secure time service, for which early data shows detection of bad client clocks when validating certificates improving from 78% to 95%. We see migration to HTTPS as foundational to any web security whatsoever, so we're actively working to drive #MOARTLS across Google and the Internet at large. To help people understand the security limitations of non-secure HTTP, Chrome now marks HTTP pages with password or credit card form fields as “not secure” in the address bar, and is experimenting with in-form contextual warnings. We’ll remove support for EME over non-secure origins in Chrome 58, and we’ll remove support for notifications over non-secure origins in Chrome 61. We talked about our #MOARTLS methodology and the HTTPS business case at Enigma. In addition to #MOARTLS, we want to ensure more secure TLS through work on protocols and the certificate ecosystem. TLS 1.3 is the next major version of the Transport Layer Security protocol. In Q1, Chrome made the first significant deployment of TLS 1.3 by a browser. Based on what we learned from that, we hope to fully enable TLS 1.3 in Chrome in Q2. In February, researchers from Google and CWI Amsterdam successfully mounted a collision attack against the SHA-1 hash algorithm. SHA-1 had been known to be weak for a very long time, and in Chrome 56 we dropped support for website certificates that used it. This was the culmination of a plan first announced back in 2014, which we've updated a few times since. As ever, many thanks to all those in the Chromium community who help make the web more secure! Cheers Andrew, on behalf of the Chrome Security Team For more thrilling security updates and feisty rants, subscribe to security-dev@chromium.org. You can find older updates at https://dev.chromium.org/Home/chromium-security/quarterly-updates. Q4 2016Greetings and salutations, It's time for another update from your friends in Chrome Security, who are hard at work trying to make Chrome the most secure platform to browse the Internet. Here’s a recap from the last quarter of 2016: Our Bugs-- effort aims to find (and exterminate) security bugs. We announced OSS-Fuzz, a new Beta program developed over the past few years with the Core Infrastructure Initiative community. This program will provide continuous fuzzing for select core open source software. See the full blog post here. So far, more than 50 projects have been integrated with OSS-Fuzz and we have found ~350 bugs. Security bugs submitted by external researchers can receive cash money from the Chrome VRP. Last year the Chrome VRP paid out almost one million dollars! More details in a blog post we did with our colleagues in the Google and Android VRPs.
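As a practical aside on the non-secure-origin deprecations above (EME in Chrome 58, notifications in Chrome 61): page code can feature-detect a secure context before relying on such APIs. Here is a minimal sketch using standard DOM APIs; the function name and fallback behavior are ours, purely for illustration.

```ts
// Sketch: gate powerful features on a secure context, since Chrome removes
// them from http:// origins over time (notifications being one example).
async function enableNotifications(): Promise<boolean> {
  if (!window.isSecureContext || !('Notification' in window)) {
    console.warn('Notifications require a secure (https://) context here.');
    return false; // illustrative fallback: the feature stays disabled
  }
  const permission = await Notification.requestPermission();
  return permission === 'granted';
}
```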
Bugs still happen, so our Guts effort builds in multiple layers of defense. Win32k lockdown for Pepper processes, including Adobe Flash and PDFium, was shipped to Windows 10 clients on all channels in October 2016. Soon after the mitigation was enabled, a Flash 0-day that used win32k.sys as a privilege escalation vector was discovered being used in the wild, and it was successfully blocked by this mitigation! James Forshaw from Project Zero also wrote a blog post about the process of shipping this new mitigation. A new security mitigation for Windows 8 and above hit Stable in October 2016 (Chrome 54). This mitigation disables extension points (legacy hooking), blocking a number of third-party injection vectors. It is enabled on all child processes (CL chain). As usual, you can find the Chromium sandbox documentation here. Site Isolation makes the most of Chrome's multi-process architecture to help reduce the scope of attacks. Our earlier plan to launch --isolate-extensions in Chrome 54 hit a last-minute delay, and we're now aiming to turn it on in Chrome 56. In the meantime, we've added support for drag and drop into out-of-process iframes (OOPIFs) and for printing an OOPIF. We've fixed several other security and functional issues for --isolate-extensions as well. We've also started an A/B trial on Canary to use OOPIFs for Chrome App <webview> tags, and we're close to starting an A/B trial of --top-document-isolation. Beyond the browser, our web platform efforts foster cross-vendor cooperation on developer-facing security features. After a good deal of experimentation, we (finally) tightened the behavior of cookies' `secure` attribute. Referrer Policy moved to a candidate recommendation, we made solid progress on Clear-Site-Data, and we expect to start an origin trial for Suborigins shortly. Looking to the future, we've started to flesh out our proposal for stronger origin isolation properties, continued discussions on a proposal for setting origin-wide policy, and began working with the IETF to expand opt-in Certificate Transparency enforcement to the open web. We hope to further solidify all of these proposals in Q1. We also spend time building security features that users see. Our security indicator text labels launched in Chrome 55 for “Secure” HTTPS, “Not Secure” broken HTTPS, and “Dangerous” pages flagged by Safe Browsing. As part of our long-term effort to mark HTTP pages as non-secure, we built address-bar warnings into Chrome 56 to mark HTTP pages with password or credit card form fields as “Not secure”. We see migration to HTTPS as foundational to any web security whatsoever, so we're actively working to drive #MOARTLS across Google and the Internet at large. We added a new HTTPS Usage section to the Transparency Report, which shows how the percentage of Chrome pages loaded over HTTPS increases with time. We talked externally at O’Reilly Security NYC + Amsterdam and Chrome Dev Summit about upcoming HTTP UI changes and the business case for HTTPS. We published positive stories about HTTPS migrations. In addition to #MOARTLS, we want to ensure more secure TLS. We concluded our experiment with post-quantum key agreement in TLS. We implemented TLS 1.3 draft 18, which will be enabled for a fraction of users with Chrome 56.
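The tightened `secure` cookie behavior mentioned above ("strict secure cookies") is easiest to see from the server's perspective. A rough sketch of the idea with hypothetical cookie names; this is our illustration of the rule, not Chrome code.

```ts
// Sketch: under strict secure cookies, a cookie marked Secure can only be
// created or modified over a secure connection, and a plain-http response can
// no longer set a cookie that would clobber an existing Secure cookie.
export function sessionCookieHeaders(sessionId: string): string[] {
  return [
    // Sensitive cookie: only ever set and sent over https://.
    `session=${sessionId}; Secure; HttpOnly; Path=/`,
    // Non-sensitive preference cookie without the Secure attribute.
    'theme=dark; Path=/',
  ];
}
```

The returned strings would be sent as Set-Cookie headers; the `session` cookie is the kind the new rules protect from being set or overwritten by an http:// page.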
And here are some other areas we're still investing heavily in: Keeping users safe from Unwanted Software (UwS, pronounced 'ooze') and improving the Chrome Cleanup Tool, which has helped millions remove UwS that was injecting ads, changing settings, and otherwise blighting their machines. Working on usable, understandable permissions prompts. We're experimenting with different prompt UIs, tracking prompt interaction rates, and continuing to learn how best to ensure users are in control of powerful permissions. As ever, many thanks to all those in the Chromium community who help make the web more secure! Cheers Andrew, on behalf of the Chrome Security Team For more thrilling security updates and feisty rants, subscribe to security-dev@chromium.org. You can find older updates at https://dev.chromium.org/Home/chromium-security/quarterly-updates. Q3 2016Greetings and salutations! It's time for another update from your friends in Chrome Security, who are hard at work trying to make Chrome the most secure platform to browse the Internet. Here’s a recap from last quarter: Our Bugs-- effort aims to find (and exterminate) security bugs. We have continued to improve upon our libFuzzer and AFL integration with ClusterFuzz, which includes automated performance analysis and quarantining of bad units (like slow units, leaks, etc). We have scaled our code coverage to ~160 targets with help from Chrome developers, who contributed these during the month-long Fuzzathon. We have improved our infrastructure reliability and response times by adding a 24x7 monitoring solution, and fixing more than two dozen fuzzers in the process. Finally, we have refined our crash bucketization algorithm and enabled automatic bug filing to remove human latency in filing regression bugs — long live the machines! For Site Isolation, the first uses of out-of-process iframes (OOPIFs) have reached the Stable channel in Chrome 54! We're using OOPIFs for --isolate-extensions mode, which ensures that web content is never put into a privileged extension process. In the past quarter, we made significant progress and fixed all our blocking bugs, including enabling the new session history logic by default, supporting cross-process POST submissions, and IME in OOPIFs. We also fixed bugs in painting, input events, and many other areas. As a result, --isolate-extensions mode has been enabled for 50% of M54 Beta users and is turned on by default in M55. From here, we plan to further improve OOPIFs to support --top-document-isolation mode, Chrome App <webview> tags, and Site Isolation for real web sites. We also spend time building security features that users see. We overhauled Chrome’s site security indicators in Chrome 52 on Mac and Chrome 53 on all other platforms, including adding new icons for Safe Browsing. These icons were the result of extensive user research, which we shared in a peer-reviewed paper. Lastly, we made recovering blocked downloads much less confusing. We like to avoid showing unnecessarily scary warnings when we can. We analyzed data from opted-in Safe Browsing Extended Reporting users to quantify the major causes of spurious TLS warnings, like bad client clocks and misconfigured intermediate certificates. We also launched two experiments, Expect-CT and Expect-Staple, to help site owners deploy advanced new TLS features (Certificate Transparency and OCSP stapling) without causing warnings for their users (a header sketch follows this paragraph). Beyond the browser, our web platform efforts foster cross-vendor cooperation on developer-facing security features.
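Here is the promised sketch of the report-only Expect-CT experiment mentioned just above, written as a small TypeScript helper. The max-age and report endpoint are placeholders, and the exact Expect-Staple syntax varied during that experiment, so it is only described in a comment.

```ts
// Sketch: report-only Certificate Transparency monitoring. Omitting the
// `enforce` directive means the browser only reports, it does not block.
// (Expect-Staple was a similar experimental report-only header for OCSP
// stapling.)
import type { ServerResponse } from 'node:http';

export function addExpectCt(res: ServerResponse): void {
  res.setHeader(
    'Expect-CT',
    'max-age=86400, report-uri="https://example.com/ct-report"'
  );
}
```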
We continued to lock down the security of the web platform while also expanding capabilities to developers. We helped lock down cookies by starting to ship Strict Secure Cookies. Similarly, we also shipped the Referrer Policy spec and policy header. Content Security Policy was expanded with the strict-dynamic and unsafe-hashed-attributes directives. Our work on suborigins continued, updating the serialization and adding new web platform support. We've also been working on making users feel more in control of powerful permissions. In M55 and M56 we will be running experiments on permissions prompts to evaluate how different approaches affect acceptance and decision rates. The experiments include letting users make temporary decisions, auto-denying prompts if users keep ignoring them, and making permission prompts modal. We see migration to HTTPS as foundational to any web security whatsoever, so we're actively working to drive #MOARTLS across Google and the Internet at large. We announced concrete steps towards marking HTTP sites as non-secure in Chrome UI — starting with marking HTTP pages with password or credit card form fields as “Not secure” starting in Chrome 56 (Jan 2017). We added YouTube and Calendar to the HTTPS Transparency Report. We’re also happy to report that www.google.com uses HSTS! In addition to #MOARTLS, we want to ensure more secure TLS. We continue to work on TLS 1.3, a major revision of TLS. For current revisions, we’re also keeping the TLS ecosystem running smoothly with a little grease. We have removed DHE-based ciphers and added RSA-PSS. Finally, having removed RC4 from Chrome earlier this year, we’ve now removed it from BoringSSL’s TLS logic completely. We launched a very rough prototype of Roughtime, a combination of NTP and Certificate Transparency. In parallel, we’re investigating what reduction in Chrome certificate errors a secure clock like Roughtime could give us. We also continued our experiments with post-quantum cryptography by implementing CECPQ1 to help gather some real-world data. As ever, many thanks to all those in the Chromium community who help make the web more secure! Cheers Andrew on behalf of the Chrome Security Team Q2 2016Greetings Earthlings, It's time for another update from your friends in Chrome Security, who are hard at work trying to make Chrome the most secure platform to browse the Internet. Here’s a recap from last quarter: Our Bugs-- effort aims to find (and exterminate) security bugs. At the start of the quarter, we initiated a team-wide Security FixIt to trim the backlog of open issues… a bit of spring cleaning for our issue tracker, if you will :) With the help of dozens of engineers across Chrome, we fixed over 61 Medium+ severity security bugs in 2 weeks and brought the count of open issues down to 22! On the fuzzing front, we’ve added support for AFL and continued to improve the libFuzzer-ClusterFuzz integration, both of which allow coverage-guided testing on a per-function basis. The number of libFuzzer-based fuzzers has expanded from 70 to 115, and we’re processing ~500 billion testcases every day! We’re also researching new ways to improve fuzzer efficiency and maximize code coverage (example). In response to recent trends from Vulnerability Reward Program (VRP) and Pwnium submissions, we wrote a new fuzzer for V8 builtins, which has already yielded bugs. Not everything can be automated, so we started auditing parts of Mojo, Chrome’s new IPC mechanism, and found several issues (1, 2, 3, 4, 5).
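For readers unfamiliar with the strict-dynamic directive called out in the Q3 2016 notes above, here is a minimal sketch of what a nonce-based policy using it looks like on the wire, again using a plain Node.js server in TypeScript; the script path and port are placeholders.

```ts
// Sketch: scripts carrying the per-response nonce are trusted, scripts they
// create at runtime inherit that trust, and host allowlists are ignored by
// browsers that understand 'strict-dynamic'.
import { createServer } from 'node:http';
import { randomBytes } from 'node:crypto';

createServer((req, res) => {
  const nonce = randomBytes(16).toString('base64'); // fresh nonce per response
  res.setHeader(
    'Content-Security-Policy',
    `script-src 'nonce-${nonce}' 'strict-dynamic'; object-src 'none'; base-uri 'none'`
  );
  res.setHeader('Content-Type', 'text/html');
  res.end(`<script nonce="${nonce}" src="/app.js"></script>`);
}).listen(8080);
```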
Bugs still happen, so our Guts effort builds in multiple layers of defense. Many Android apps use WebView to display web content inline within their app. A compromised WebView can get access to an app’s private user data and a number of Android system services / device drivers. To mitigate this risk, in the upcoming release of Android N, we’ve worked to move WebView rendering into a separate, sandboxed process. This new process model is still experimental and can be enabled under Developer Options in Settings. On Windows, a series of ongoing stability experiments with App Container and win32k lockdown for PPAPI processes (i.e., Flash and PDFium) have given us good data that puts us in a position to launch both of these new security mitigations on Windows 10 very soon! For Site Isolation, we're getting close to enabling --isolate-extensions for everyone. We've been hard at work fixing launch-blocking bugs, and out-of-process iframes (OOPIFs) now have support for POST submissions, fullscreen, find-in-page, zoom, scrolling, Flash, modal dialogs, and file choosers, among other features. We've also made lots of progress on the new navigation codepath, IME, and the task manager, along with fixing many layout tests and crashes. Finally, we're experimenting with --top-document-isolation mode to keep the main page responsive despite slow third-party iframes, and with using OOPIFs to replace BrowserPlugin for the <webview> tag. We also spend time building security features that users see. We’re overhauling the omnibox security iconography in Chrome -- new, improved connection security indicators are now in Chrome Beta (52) on Mac and Chrome Dev (53) for all other platforms. We created a reference interstitial warning that developers can use for their implementations of the Safe Browsing API. Speaking of Safe Browsing, we’ve extended protection to cover files downloaded by Flash apps, we’re evaluating many more file types than before, and we closed several gaps that were reported via our Safe Browsing Download Protection VRP program. Beyond the browser, our web platform efforts foster cross-vendor cooperation on developer-facing security features. We shipped an implementation of the Credential Management API (and presented a detailed overview at Google I/O), iterated on Referrer Policy with a `referrer-policy` header implementation behind a flag, and improved our support for SameSite cookies. We're continuing to experiment with Suborigins with developers both inside and outside Google, built a prototype of CORS-RFC1918, and introduced safety nets to protect against XSS vulnerabilities due to browser bugs[1]. We've also been working on making users feel more in control of powerful permissions. All permissions will soon be scoped to origins, and we've started implementing permission delegation (which is becoming part of feature policy). We’re also actively working to show fewer permission prompts to users, and to improve the prompts and UI we do show... subtle, critical work that makes web security more human-friendly (and thus, effective). We see migration to HTTPS as foundational to any web security whatsoever, so we're actively working to drive #MOARTLS across Google and the Internet at large. Emily and Emily busted HTTPS myths for large audiences at Google I/O and the Progressive Web App dev summit. The HSTS Preload list has seen 3x growth since the beginning of the year – a great problem to have!
We’ve addressed some growth hurdles by rewriting the submission site, and we’re actively working on the preload list infrastructure and on how to scale further in the long term. In addition to #MOARTLS, we want to ensure more secure TLS. Some of us have been involved in the TLS 1.3 standardization work and implementation. On the PKI front, and as part of our Expect-CT project, we built the infrastructure in Chrome that will help site owners track down certificates for their sites that are not publicly logged in Certificate Transparency logs. As of Chrome 53, we’ll be requiring Certificate Transparency information for certificates issued by Symantec-operated CAs, per our announcement last year. We also launched some post-quantum cipher suite experiments to protect everyone from... crypto hackers of the future and more advanced worlds ;) For more thrilling security updates and feisty rants, subscribe to security-dev@chromium.org. You can find older updates at https://dev.chromium.org/Home/chromium-security/quarterly-updates. Happy Hacking, Parisa, on behalf of Chrome Security [1] Please let us know if you manage to work around them! Q1 2016Greetings web fans, The Bugs-- effort aims to find (and exterminate) security bugs. On the fuzzing front, we’ve continued to improve the integration between libFuzzer and ClusterFuzz, which allows coverage-guided testing on a per-function basis. With the help of many developers across several teams, we’ve expanded our collection of fuzzing targets in Chromium (that use libFuzzer) to 70! Not all bugs can be found by fuzzing, so we invest effort in targeted code audits too. We wrote a guest post on the Project Zero blog describing one of the more interesting vulnerabilities we discovered. Since we find a lot of bugs, we also want to make them easier to manage. We’ve updated our Sheriffbot tool to simplify the addition of new rules and expanded it to help manage functional bugs in addition to just security issues. We’ve also automated assigning security severity recommendations. Finally, we continue to run our vulnerability reward program to recognize bugs discovered by researchers outside of the team. As of M50, we’ve paid out over $2.5 million since the start of the reward program, including over $500,000 in 2015. Our median payment amount for 2015 was $3,000 (up from $2,000 for 2014), and we want to see that increase again this year! Bugs still happen, so our Guts effort builds in multiple layers of defense. On Android, our seccomp-bpf experiment has been running on the Dev channel and will advance to the Stable and Beta channels with M50. Chrome on Windows is evolving rapidly in step with the operating system. We shipped four new layers of defense in depth to take advantage of the latest capabilities in Windows 10, some of which patch vulnerabilities found through our own research and feedback! There was great media attention when these changes landed, from Ars Technica to a Risky Business podcast, which said: “There have been some engineering changes to Chrome on Windows 10 which look pretty good. … It’s definitely the go-to browser, when it comes to not getting owned on the internet. And it’s a great example of Google pushing the state of the art in operating systems.” For our Site Isolation effort, we have expanded our ongoing launch trial of --isolate-extensions to include 50% of both Dev Channel and Canary Channel users! This mode uses out-of-process iframes (OOPIFs) to keep dangerous web content out of extension processes.
(See here for how to try it.) We've fixed many launch-blocking bugs, and improved support for navigation, input events, hit testing, and security features like CSP and mixed content. We improved our test coverage and made progress on updating features like fullscreen, zoom, and find-in-page to work with OOPIFs. We're also excited to see progress on other potential uses of OOPIFs, including the <webview> tag and an experimental "top document isolation" mode. We spend time building security features that people see. In response to user feedback, we’ve replaced the old full-screen prompt with a new, lighter-weight ephemeral message in M50 across Windows and Linux. We launched a few bug fixes and updates to the Security panel, which we continue to iterate on and support in an effort to drive forward HTTPS adoption. We also continued our work on removing powerful features on insecure origins (e.g., geolocation). We’re working on preventing abuse of powerful features on the web. We continue to support great “permissions request” UX, and have started reaching out to top websites to directly help them improve how they request permissions for powerful APIs. To give top-level websites more control over how iframes use permissions, we started external discussions about a new Permission Delegation API. We also extended our vulnerability rewards program to support Safe Browsing reports, the first program of its kind. Beyond the browser, our web platform efforts foster cross-vendor cooperation on developer-facing security features. We now have an implementation of Suborigins behind a flag, and have been experimenting with Google developers on usage. We polished up the Referrer Policy spec, refined its integration with ServiceWorker and Fetch, and shipped the `referrerpolicy` attribute from that document. We're excited about the potential of new CSP expressions like 'unsafe-dynamic', which will ship in Chrome 52 (and is experimentally deployed on our shiny new bug tracker). In that same release, we finally shipped SameSite cookies, which we hope will help prevent CSRF. Lastly, we're working to pay down some technical debt by refactoring our Mixed Content implementation and X-Frame-Options to work in an OOPIF world. We see migration to HTTPS as foundational to any security whatsoever (and we're not the only ones), so we're actively working to drive #MOARTLS across Google and the Internet at large. We worked with a number of teams across Google to help publish an HTTPS Report Card, which aims to hold Google and other top sites accountable, as well as encourage others to encrypt the web. In addition to #MOARTLS, we want to ensure more secure TLS. We mentioned we were working on it last time, but RC4 support is dead! The insecure TLS version fallback is also gone. With help from the libFuzzer folks, we got much better fuzzing coverage on BoringSSL, which resulted in CVE-2016-0705. We ended up adding a "fuzzer mode" to the SSL stack to help the fuzzer get past cryptographic invariants in the handshake, which smoked out some minor (memory leak) bugs. Last but not least, we rewrote a large chunk of BoringSSL's ASN.1 parsing with a simpler and more standards-compliant stack. For more thrilling security updates and feisty rants, subscribe to security-dev@chromium.org. You can find older updates at https://dev.chromium.org/Home/chromium-security/quarterly-updates. Happy Hacking, Parisa, on behalf of Chrome Security