Google Security Engineering Technical Report
March 4, 2024

Secure by Design: Google's Perspective on Memory Safety

Alex Rebert, Christoph Kern

Executive Summary

2022 marked the 50th anniversary of memory safety vulnerabilities, first reported by Anderson [2]. Half a century later, we are still dealing with memory safety bugs despite substantial investments to improve memory-unsafe languages.

Like others', Google's internal vulnerability data and research show that memory safety bugs are widespread and one of the leading causes of vulnerabilities in memory-unsafe codebases. Those vulnerabilities endanger end users, our industry, and the broader society.

At Google, we have decades of experience addressing, at scale, large classes of vulnerabilities that were once similarly prevalent as memory safety issues. Based on this experience we expect that high assurance memory safety can only be achieved via a Secure-by-Design approach centered around comprehensive adoption of languages with rigorous memory safety guarantees. As a consequence, we are considering a gradual transition towards memory-safe languages.

Over the past decades, Google has developed and accumulated hundreds of millions of lines of C++ code that is in active use and under active, ongoing development. This very large existing codebase results in significant challenges for a transition to memory safety:

- On one hand, we see no realistic path for an evolution of C++ into a language with rigorous memory safety guarantees that include temporal safety.
- At the same time, a large-scale rewrite of existing C++ code into a different, memory-safe language appears very difficult and will likely remain impractical.

This means that we will likely be operating a very substantial C++ codebase for quite some time. We thus consider it important to complement a transition to memory-safe languages for new code and particularly at-risk components with safety improvements for existing C++ code, to the extent practicable. We believe that substantial improvements can be achieved through an incremental transition to a partially-memory-safe C++ language subset, augmented with hardware security features when available.

Defining Memory Safety Bugs

Memory safety bugs arise when a program allows statements to execute that read or write memory when the program is in a state where the memory access constitutes undefined behavior. When such a statement is reachable in a program state under adversarial control (e.g., processing untrusted inputs), the bug often represents an exploitable vulnerability (in the worst case, permitting arbitrary code execution).
Defining Rigorous Memory Safety

In this context, we consider a language rigorously memory-safe if it:

- Defaults to a well-delineated safe subset, and
- Ensures that arbitrary code written in the safe subset is prevented from causing a spatial, temporal, type, or initialization safety violation¹. This can be established through any combination of compile-time restrictions and runtime protections, provided the runtime mechanisms guarantee that safety violations cannot occur.

¹ Under the assumption that all unsafe code that is part of the program is sound.

With very few, well-defined exceptions, all code should be writable in the well-delineated safe subset. In new development, potentially unsafe code should only occur in components/modules that explicitly opt into use of unsafe constructs outside of the safe language subset, and expose a safe abstraction that is expert-reviewed for soundness. Unsafe constructs should only be used when necessary, e.g. for critical performance reasons or in code that interacts with low-level components.

When working with existing code in a non-memory-safe language, unsafe code should be restricted to uses including:

- Code written in a safe language that makes calls into a library implemented by a legacy codebase written in an unsafe language.
- Code additions/modifications to existing unsafe legacy code bases, where code is too deeply intermingled to make development in a safe language practical.

Impact of Memory Safety Vulnerabilities

Memory safety bugs are responsible for the majority (70%) of severe vulnerabilities in large C/C++ code bases. Below are the percentages of vulnerabilities due to memory unsafety:

- Chrome: 70% of high/critical vulnerabilities [17]
- Android: 70% of high/critical vulnerabilities² [7]
- Google servers: 16-29% of vulnerabilities³
- Project Zero: 68% of in-the-wild zero days [10]
- Microsoft: 70% of vulnerabilities with CVEs [16]

² The fraction of memory safety vulnerabilities has gone down over the last few years thanks to memory safety improvements.
³ The range reflects uncertainty around automated severity assessment of memory safety issues found by our automation, e.g. by fuzzing. Also note that this is across all workloads, including those written in memory-safe languages such as Go and Java/Kotlin.

Memory safety errors continue to appear at the top of "most dangerous bugs" lists such as the CWE Top 25 and the CWE Top 10 of Known Exploited Vulnerabilities. Google's internal vulnerability research repeatedly demonstrates that lack of memory safety weakens important security boundaries.

Understanding Memory Safety Bugs

Classes of Memory Safety Bugs
It can be helpful to distinguish a number of subclasses of memory safety bugs that differ in their possible solutions and the impact on performance and developer experience thereof:

- Spatial safety bugs (e.g. "buffer overflow", "out-of-bounds access") occur when a memory access refers to memory outside of the accessed object's allocated region.
- Temporal safety bugs arise when a memory access to an object occurs outside of the object's lifetime. An example is when a function returns a pointer to a value in its stack frame ("use-after-return"), or when a pointer refers to heap-allocated memory that has since been freed, and possibly re-allocated for a different object ("use-after-free"). It is common in concurrent programs for these bugs to occur due to improper thread synchronization, but when the initial safety violation is outside of the lifetime of the object, we classify it as a temporal safety violation.
- Type safety bugs arise when a value of a given type is read from memory that does not contain a member of this type. An example of this is when memory is read after an invalid pointer cast.
- Initialization safety bugs arise when memory is read before being initialized. This can lead to information disclosures and type/temporal safety bugs.
- Data-race safety bugs arise from unsynchronized reads and writes by different threads, which may access an object in an inconsistent state. It is possible for other forms of safety bugs to also arise from improper or missing synchronization; however, we do not classify these as data-race safety bugs, as they are handled by the classes above. Only when the reads and writes are otherwise correct except for being unsynchronized are they considered data-race safety bugs. Once a data-race safety violation has occurred, subsequent execution may cause further safety bugs. We classify these as data-race safety bugs, as the initial violation is strictly a data-race issue without any other bugs evident.

The classification used here roughly aligns with Apple's memory safety taxonomy [4].

In unsafe languages such as C/C++, it is the programmer's responsibility to ensure the safety preconditions are met to avoid accessing invalid memory. For instance, for spatial safety, when accessing elements of an array via index (e.g., a[i] = x), it is the programmer's responsibility to ensure the safety precondition that the index is within the bounds of validly-allocated memory.
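To make that precondition concrete, here is a minimal sketch in Rust (a language discussed later in this report; the variable names are ours): the safe indexing operator checks the bound at runtime, while the unsafe variant carries exactly the proof obligation the C/C++ programmer bears.

```rust
// A minimal sketch of the same precondition made explicit in Rust.
fn main() {
    let a = [10, 20, 30];
    let i = 2;

    // Checked: an out-of-bounds index causes a runtime panic, not UB.
    let x = a[i];

    // Unchecked: compiles only inside `unsafe`; an out-of-bounds `i`
    // here would be undefined behavior, exactly the C/C++ hazard.
    let y = unsafe { *a.get_unchecked(i) };

    assert_eq!(x, y);
}
```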
We currently exclude data-race safety from consideration under rigorous memory safety for the following reasons:

- Data-race safety is a bug class of its own, and only partially overlaps with memory safety. For example, Java does not provide data-race-safety guarantees, but data races in Java cannot cause violation of low-level heap integrity invariants (memory corruption).
- We currently do not have the same level of evidence for data-race unsafety leading to systemic security and reliability issues for software written in otherwise rigorously memory-safe languages (e.g. Go).

Why are Memory Safety Bugs so Intractable?

Memory safety bugs are quite common in large C++ code bases. The intuition behind the prevalence of memory safety bugs is as follows:

First, in unsafe languages, programmers are responsible for ensuring that each statement's memory safety precondition holds just before it is executed, in any program state that could possibly be reached, potentially under the influence of adversarial inputs to the program.

Secondly, unsafe statements that potentially result in memory safety bugs are very common in C/C++ programs: there are many array accesses, pointer dereferences, and heap allocations.

Finally, reasoning about safety preconditions, and whether the program ensures them in every possible program state, is difficult, even with tool assistance. For example:

- Reasoning about the in-bounds-ness of a pointer/index involves wrapping integer arithmetic, which is quite non-intuitive to humans.
- Reasoning about the lifetime of heap objects often involves complicated and subtle whole-program invariants. Even local scoping and lifetime can be subtle and surprising.

"Many potential bugs" combined with "difficult reasoning about safety preconditions" and "humans make mistakes" results in a relatively significant number of actual bugs. Attempts to mitigate the risk of memory safety vulnerabilities through developer education and reactive approaches (including static/dynamic analysis to find and fix bugs, and various exploit mitigations) have failed to lower the incidence of these bugs to a tolerable level. As a result, severe vulnerabilities continue to be caused by this class of vulnerabilities as discussed above.
Tackling Memory Safety Bugs

Tackling memory safety requires a multi-pronged approach consisting of:

- Preventing memory safety bugs through Safe Coding.
- Mitigating memory safety bugs by making exploitation more expensive.
- Detecting memory safety bugs, as early as possible in the development lifecycle.

We believe that all three are necessary for solving memory safety at Google's scale. Based on our experience, a strong emphasis on prevention through Safe Coding is necessary to sustainably achieve high assurance.

Preventing Memory Safety Bugs through Safe Coding

Our experience at Google shows that we can engineer away classes of problems at scale by eliminating the use of vulnerability-prone coding constructs. In this context, we consider a construct unsafe if it can potentially manifest a bug (e.g. memory corruption) unless a safety precondition is satisfied at its time of use. Unsafe constructs place the onus on the developer to ensure the precondition. Our approach, which we call "Safe Coding", treats unsafe coding constructs themselves as hazards (i.e., independently of and in addition to the vulnerability they might cause), and is centered around ensuring that developers do not encounter such hazards during regular coding practice [12].

In essence, Safe Coding calls for unsafe constructs to be disallowed by default, and their use to be replaced by safe abstractions in most code, with carefully-reviewed exceptions. In the domain of memory safety, safe abstractions may be provided using:

- Statically- or dynamically-ensured safety invariants, preventing the introduction of bugs. Compile-time checks and compiler-emitted or runtime-provided mechanisms guarantee that particular classes of bugs cannot occur. For instance:
  - At compile time, lifetime analysis prevents a subset of temporal safety bugs.
  - At runtime, automated object initialization guarantees the absence of uninitialized reads.
- Runtime error detection, enforcing memory safety invariants by raising an error when a memory safety violation is detected instead of continuing execution with corrupted memory. The underlying bugs still exist and will need to be fixed⁴, but the vulnerabilities are eliminated (modulo denial-of-service attacks⁵). For instance (see the sketch below):
  - An array lookup may offer spatial safety error detection by verifying the given index is in-bounds. Checks may be elided where safety is proven statically.
  - A type cast may offer type safety error detection by checking that the casted object is an instance of the resulting type (e.g. ClassCastException in Java or CastGuard for C++).

⁴ Runtime error detection helps root-cause crashes by precisely pinpointing the underlying memory safety bug.
⁵ Runtime errors can typically be caught and recovered from; e.g. an out-of-bounds access in Go raises a recoverable run-time panic. This allows servers to safely recover from runtime errors raised during processing of a request, without crashing the entire process. Runtime errors raised in server framework code itself may not be recoverable.
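As an illustration of runtime error detection and recovery, here is a minimal Rust sketch (the `lookup` helper and the values are ours, for illustration): the out-of-bounds access surfaces as a catchable panic rather than memory corruption, analogous to the Go behavior described in footnote 5.

```rust
use std::panic;

// The out-of-bounds read is stopped with a panic before any memory is
// corrupted, and a server-style caller can contain the failure instead
// of continuing with bad state.
fn lookup(data: &[u8], index: usize) -> u8 {
    data[index] // bounds-checked at runtime; panics if index >= data.len()
}

fn main() {
    let data = vec![1, 2, 3];
    // catch_unwind contains the panic (the default hook still prints it).
    let result = panic::catch_unwind(|| lookup(&data, 10));
    // The bug still needs fixing, but it surfaced as an error,
    // not as silent memory corruption.
    assert!(result.is_err());
}
```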
In the memory safety domain, the Safe Coding approach is embodied by safe languages, which replace unsafe constructs with safe abstractions such as runtime bounds checks, garbage-collected references, or references adorned with statically-checked lifetime annotations.

Experience shows that memory safety issues are indeed rare in safe, garbage-collected languages such as Go and Java. However, garbage collection typically comes with significant runtime overhead. More recently, Rust has emerged as a language that embodies the Safe Coding approach based primarily on compile-time checked type discipline, resulting in minimal runtime overheads.

Data shows that Safe Coding works for memory safety, even in performance-sensitive environments. For instance, Android 13 introduced 1.5M lines of Rust with zero memory safety vulnerabilities. This prevented an estimated hundreds of memory safety vulnerabilities: "As the amount of new memory unsafe code entering Android has decreased, so too has the number of memory safety vulnerabilities. 2022 was the first year where memory safety vulnerabilities did not represent a majority of Android's vulnerabilities. While correlation doesn't necessarily mean causation, ... the shift is a major departure from industry-wide trends listed above that have persisted for more than a decade".

As another example, Cloudflare reports that their Rust HTTP proxy outperforms NGINX, and has "served a few hundred trillion requests and has yet to crash due to our service code."

By applying a subset of preventative memory safety mechanisms to an unsafe language such as C++, we can partially prevent classes of memory safety issues. For instance:

- A buffer hardening RFC may eliminate a subset of spatial safety issues in C++.
- Similarly, a bounds safety RFC may eliminate a subset of spatial safety issues in C.
- Lifetime annotations in C++ may eliminate a subset of temporal safety issues.
Exploit Mitigations

Exploit mitigations complicate exploitation of memory safety vulnerabilities, rather than fixing the root cause of these vulnerabilities. For instance, mitigations include sandboxing of unsafe libraries, control-flow integrity, and data execution prevention. While safe abstractions prevent memory corruption, denying exploitation primitives to attackers, exploit mitigations assume that memory can be corrupted. Exploit mitigations aim to make it difficult for attackers to escalate from some exploitation primitives to unrestricted code execution.

Attackers regularly bypass these mitigations, raising the question of their security value. To be useful, mitigations should require attackers to chain additional vulnerabilities, or invent a novel bypass technique. Over time, bypass techniques become more valuable to attackers than any single vulnerability. The security benefit of a well-designed mitigation lies in the fact that bypass techniques should be far rarer than vulnerabilities.

Exploit mitigations rarely come for free; they tend to incur a runtime overhead that is generally a low single-digit percentage. They provide a tradeoff between security and performance, which we can adjust based on each workload's needs. Runtime overheads can be reduced by building mitigations directly into the silicon, as was done for pointer authentication, shadow call stacks, landing pads, and protection keys. Due to the overhead and opportunity costs of hardware features, considerations around adoption of, and investment in, those techniques are nuanced.

In our experience, sandboxing is an effective mitigation for memory safety vulnerabilities and is commonly used at Google to isolate brittle libraries with a history of vulnerabilities. However, there are several challenges to the adoption of sandboxing:

- Sandboxing can incur significant overheads in latency and bandwidth, as well as costs for the required code refactoring. This sometimes necessitates reuse of sandbox instances across requests, which weakens the mitigation.
- Creating a sandbox policy that is sufficiently restrictive to be an effective mitigation can be challenging for developers, especially when sandbox policies are expressed at a low level of abstraction, such as system call filters.
- Sandboxing can cause reliability risks, when unusual (but benign) code paths are exercised in production and trigger sandbox policy violations.

Overall, exploit mitigations are an essential tool in improving the security of a large pre-existing C++ code base, and will also benefit residual use of unsafe constructs in memory-safe languages.
Finding Memory Safety Bugs

Static analysis and fuzzing are effective tools for detecting memory safety bugs. They reduce the volume of memory safety bugs in our code base as developers fix the detected issues.

However, in our experience, bug finding alone does not achieve an acceptable level of assurance for memory-unsafe languages. As an example, the recent webp high-severity 0-day (CVE-2023-4863) affected extensively fuzzed code. The vulnerability was missed despite high fuzzing coverage (97.55% in the relevant file). In practice, we miss many memory safety bugs, as demonstrated by the steady stream of memory safety vulnerabilities in well-tested memory-unsafe code.

In addition, finding bugs does not in itself improve security. The bugs must be fixed and the patches deployed. There is evidence suggesting that bug finding capabilities are outpacing bug fixing capacity. For instance, syzkaller, our kernel fuzzer, has found 5k+ bugs in the upstream Linux kernel, such that at any given time there are hundreds of open bugs (a large fraction of which are likely security-relevant), a number that has been steadily growing since 2017.

We nevertheless believe that bug finding is an essential part of tackling memory unsafety. Bug finding techniques that put less strain on bug fixing capacity are particularly valuable:

- "Shifting left", such as fuzzing in presubmit, reduces the rate of new bugs shipped to production. Bugs found earlier in the SDLC (software development life cycle) are cheaper to fix⁶, consequently increasing our bug fixing capacity.
- Bug finding techniques, like static analysis, may also suggest fixes, which can be provided through the IDE or pull requests, or applied automatically to proactively change existing code.
- Bug finding tools like sanitizers, which identify root causes and generate actionable bug reports, help developers fix issues faster, also increasing our bug fixing capacity.

⁶ "The average cost of finding and fixing a bug increases about 10 times with every step of the development process" [11]

Additionally, bug finding tools find bug classes beyond memory safety, which broadens the impact of investing in those tools. They can find reliability, correctness and other safety issues, for instance:

- Property-based fuzzing finds inputs violating application-level invariants, such as correctness properties encoded by developers. For instance, cryptofuzz has found 150+ bugs in crypto libraries.
- Fuzzing finds resource-usage bugs (e.g. infinite recursions), and plain crashes affecting availability. In particular, runtime error detection (e.g. bounds checking) transforms memory safety vulnerabilities into runtime errors, which remain a reliability and DoS concern.

Advances in detecting vulnerabilities beyond memory safety are showing promise.
Deep Dive: Safe Coding Applied to Memory Safety

Google has developed Safe Coding, a scalable approach to drastically reduce the incidence of common classes of vulnerabilities, and to achieve a high degree of assurance that vulnerabilities are absent.

Over the past decade, we have applied this approach very successfully at Google's scale, primarily to so-called Injection Vulnerabilities, including SQL injection and XSS. While at a technical level very different from memory safety bugs, there are relevant parallels:

- Like memory safety bugs, injection bugs occur when a developer uses a potentially-unsafe code construct, and fails to ensure its safety precondition.
- Whether the precondition holds depends on complex reasoning about whole-program, or whole-system, data flow invariants. For example, the potentially-unsafe construct occurs in browser-side code, but the data might arrive via several microservices and a server-side datastore. This makes it hard to reason about where data really came from, and whether necessary validation has been correctly applied somewhere along the way.
- Potentially-unsafe constructs are common in typical code bases.

As with memory safety bugs, "many 1000s of potential bugs" led to 100s of actual bugs. Reactive approaches (code review, pen testing, fuzzing) were largely unsuccessful.

To address this issue at scale and with high assurance, Google applied Safe Coding to the domain of injection vulnerabilities. This was unequivocally successful and resulted in a very significant reduction, and in some cases complete elimination, of XSS vulnerabilities. For example, before 2012, web frontends like GMail often had a few dozen XSS per year; after refactoring code to conform to Safe Coding requirements, defect rates have dropped to near zero. The Google Photos web frontend (which has been developed from the start on a web application framework that comprehensively applies Safe Coding) has had zero reported XSS vulnerabilities in its entire history.

In the following sections, we discuss in more detail how the Safe Coding approach applies to memory safety, and draw parallels to its successful use in eliminating classes of vulnerabilities in the web security domain.
Safe abstractions

In our experience, the key to eliminating classes of bugs is to identify programming constructs (APIs or language-native constructs) that cause these bugs, and then to eliminate the use of such constructs in common programming practice. This requires the introduction of safe constructs with equivalent functionality, which often take the form of safe abstractions around the underlying unsafe constructs.

For example, XSS is caused by the use of Web Platform APIs that are unsafe to call with partially attacker-controlled strings. To eliminate the use of these XSS-prone APIs in our code, we introduced a number of equivalent safe abstractions, designed to collectively ensure that safety preconditions hold when the underlying unsafe constructs (APIs) are invoked. This includes type-safe API wrappers, vocabulary types with safety contracts, and safe HTML templating systems.
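As a sketch of what a vocabulary type with a safety contract can look like, here is a hypothetical `SafeHtml`-style type written in Rust (the type, its methods, and the escaping rules shown are ours for illustration, not Google's actual implementation):

```rust
// Values of `SafeHtml` carry the contract "safe to render"; the only
// public ways to construct one uphold that contract.
pub struct SafeHtml(String);

impl SafeHtml {
    // Safe constructor: escapes untrusted input, so the contract holds.
    pub fn from_text(text: &str) -> SafeHtml {
        let escaped = text
            .replace('&', "&amp;")
            .replace('<', "&lt;")
            .replace('>', "&gt;");
        SafeHtml(escaped)
    }

    // "Unchecked conversion": the caller asserts the contract instead of
    // the type enforcing it, so uses are rare and expert-reviewed.
    pub fn from_trusted_string_unchecked(html: String) -> SafeHtml {
        SafeHtml(html)
    }

    pub fn as_str(&self) -> &str {
        &self.0
    }
}

fn main() {
    let html = SafeHtml::from_text("<script>alert(1)</script>");
    assert_eq!(html.as_str(), "&lt;script&gt;alert(1)&lt;/script&gt;");
}
```

The design point is that the type's existence is the safety argument: any code accepting a `SafeHtml` may render it without re-validating, so reviewing safety reduces to reviewing the constructors.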
Safe abstractions to ensure memory safety preconditions might take the form of wrapper APIs in an existing language (e.g. smart pointers to be used in place of raw pointers, including MiraclePtr, which protects 50% of use-after-free issues against exploitation in Chrome's browser process), or constructs closely tied to language semantics (for example, garbage collection in Go/Java; statically-checked lifetimes in Rust).

The design of safe constructs needs to navigate a 3-way tradeoff between runtime costs (CPU, memory, binary size, etc.), development-time costs (developer friction, cognitive load, build times), and expressiveness. For example, garbage collection provides a general solution for temporal safety, but can cause problematic variability in performance [6]. Rust lifetimes combined with the borrow checker ensure safety entirely at compile time (at no runtime cost) for large classes of code⁷; however, they require more upfront effort by the programmer to demonstrate that the code is in fact safe. This is similar to how static typing requires more upfront effort compared to dynamic typing, but prevents a large swath of type errors at compile time. Sometimes, developers need to choose alternative idioms to avoid runtime overhead. For example, the overhead of a runtime bounds check for indexed traversal of a vector can be avoided by using a range-for loop.

⁷ Exceptions include cyclical data structures, which can be implemented using runtime-checked interior mutability.
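The Rust analogue of the range-for idiom above is iteration without an index; a minimal sketch (the helper names are ours):

```rust
fn sum_indexed(v: &[u64]) -> u64 {
    let mut total = 0;
    for i in 0..v.len() {
        total += v[i]; // each access is bounds-checked (often optimized out)
    }
    total
}

fn sum_iterated(v: &[u64]) -> u64 {
    // No index, so there is no bounds check to pay for or to elide.
    v.iter().sum()
}

fn main() {
    let v = [1u64, 2, 3];
    assert_eq!(sum_indexed(&v), sum_iterated(&v));
}
```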
To successfully reduce the incidence of bugs, a collection of safe abstractions needs to be sufficiently expressive to allow most code to be written without resorting to unsafe constructs (nor convoluted, non-idiomatic code that is technically safe, but difficult to understand and maintain).

Safe-by-default, unsafe-by-exception

In our experience, it is not sufficient to merely make safe abstractions available to developers on an optional basis (e.g. suggested by a style guide), as too many unsafe constructs, and hence too much risk of bugs, tend to remain. Rather, to achieve a high degree of assurance that a codebase is free of vulnerabilities, we have found it necessary to adopt a model where unsafe constructs are used only by exception, enforced by the compiler. This model consists of the following key elements:

1. It is possible to decide at build time whether a program (or part of a program, e.g. a module) contains unsafe constructs.
2. A program consisting only of safe code is guaranteed to maintain safety invariants at runtime.
3. Unsafe constructs are not permitted unless explicitly allowed/opted-into, i.e. code is safe by default.

In our work on injection vulnerabilities, we achieved safety at scale by restricting access to unsafe APIs through language-level and build-time visibility, and in some cases through custom static checks.

In the context of memory safety, achieving safety at scale requires the language⁸ to prohibit the use of unsafe constructs (e.g. unchecked indexing into arrays/buffers) by default. Unsafe constructs should cause a compile-time error unless a portion of code is explicitly opted into the unsafe subset as discussed in the next section. For example, Rust allows unsafe constructs only inside clearly-delineated unsafe blocks.

⁸ In this broader context, this could mean a memory-safe language, or a safe subset of an otherwise unsafe language.

Soundness: Safely-encapsulated unsafe code

As noted above, we assume that available safe abstractions are sufficiently expressive to allow most code to be written using safe constructs only. In practice, however, we expect most larger programs to require use of unsafe constructs in some cases. In addition, the safe abstractions themselves will often be wrapper APIs for underlying unsafe constructs. For example, the implementation of safe abstractions around heap memory allocation/deallocation ultimately needs to deal with raw memory, e.g. mmap(2).

When developers introduce even small amounts of unsafe code, it is important to do so without negating the benefits of having written most of a program using only safe code. To that end, developers should adhere to the following principle: Uses of unsafe constructs should be encapsulated in demonstrably-safe APIs.

That is, unsafe code should be encapsulated behind an API that is sound for any arbitrary (but well-typed) code calling this API. It should be possible to demonstrate, and review/verify, that the module exposes a safe API surface without making any assumptions about the calling code (other than its well-typedness).

For example, suppose the implementation of a type uses a potentially-unsafe construct. Then it is the type implementation's responsibility to independently ensure that the unsafe construct's precondition holds when it is invoked. The implementation must not make any assumptions about the behavior of its callers (besides well-typedness), for example that its methods are called in a certain order.
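A minimal Rust sketch of this principle (the `Grid` type and its invariant are our illustration, not an API from this report): the unsafe construct stays inside the implementation, which re-establishes its precondition on every call and assumes nothing about callers beyond well-typedness.

```rust
// A 2D grid stored in a flat Vec. The implementation uses unchecked
// indexing internally; the safe API maintains the invariant that makes
// the unsafe block sound: data.len() == width * height.
pub struct Grid {
    data: Vec<f64>,
    width: usize,
    height: usize,
}

impl Grid {
    pub fn new(width: usize, height: usize) -> Grid {
        // Invariant established here, with overflow rejected up front.
        let len = width.checked_mul(height).expect("grid too large");
        Grid { data: vec![0.0; len], width, height }
    }

    // Safe accessor: validates the caller-supplied indices itself,
    // rather than trusting callers to do so.
    pub fn get(&self, x: usize, y: usize) -> Option<f64> {
        if x >= self.width || y >= self.height {
            return None; // out-of-bounds is rejected, never UB
        }
        // SAFETY: x < width and y < height, and data.len() == width *
        // height, so y * width + x < data.len().
        Some(unsafe { *self.data.get_unchecked(y * self.width + x) })
    }
}

fn main() {
    let g = Grid::new(4, 3);
    assert_eq!(g.get(3, 2), Some(0.0));
    assert_eq!(g.get(4, 2), None);
}
```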
In our work on injection vulnerabilities, this principle is embodied in guidelines for the use of so-called Unchecked Conversions (which represent unsafe code in our vocabulary-type discipline). In the Rust community, this property is called Soundness [13]: a module with unsafe blocks is sound if a program consisting of that module, combined with arbitrary well-typed safe Rust, cannot exhibit Undefined Behavior.

This principle can be difficult or impossible to adhere to in certain situations, like when a program in a safe language (Rust or Go) calls into unsafe C++ code. The unsafe library might be wrapped in a "reasonably safe" abstraction, but there is no practical way to demonstrate that the implementation is truly safe and does not have a memory safety bug.

Expert review of unsafe code

Reasoning about unsafe code is difficult and can be error-prone, especially for non-experts:

- Reasoning about whether a module containing unsafe constructs in fact exposes a safe abstraction requires domain expertise. For example, in the web security domain, deciding if an unchecked conversion into the SafeHtml vocabulary type is safe requires a detailed understanding of the HTML spec, and applicable data escaping and sanitization rules. Deciding whether Rust code with unsafe is sound requires a deep understanding of unsafe Rust semantics and the boundaries of Undefined Behavior (an area of active research).
- In our experience, developers focused on solving a problem at hand frequently do not seem to appreciate the importance of safely encapsulating unsafe code, and do not attempt to devise a safe abstraction. Expert review is needed to steer those developers towards safe encapsulation, and to help design an appropriate safe abstraction.

In the web security domain, we found it necessary to mandate expert review of unsafe constructs in many cases, like for new uses of Unchecked Conversions. Without mandatory review we observed a large number of unnecessary/unsound uses of unsafe constructs, which diluted our ability to reason about safety at scale. Mandatory review requirements need to carefully consider the impact on developers and the bandwidth of the review team, and are likely only appropriate if they are sufficiently rare.
Whole-Program Safety and Compositional Reasoning

Ultimately, our goal is to ensure an adequate safety posture for an entire binary.

Binaries typically include a large number of direct and transitive library dependencies. These are typically maintained by many different teams within Google, or even externally in the case of third-party code. Yet, a memory safety bug in any of the dependencies can potentially result in a security vulnerability of the dependent binary.

A safe language, combined with a development discipline to ensure that unsafe code is encapsulated in sound, safe abstractions, can enable us to scalably reason about the safety of large programs:

- Components written solely in the language's safe subset are by construction sound and free of safety violations.
- Components that do contain unsafe constructs expose safe abstractions to the rest of the program. For these components, expert review provides solid assurance of their soundness, and that they will not cause safety violations when combined with arbitrary other components.

When all transitive dependencies fall into one of these two categories, we have solid assurance that the entire program is free of safety violations. Importantly, we do not need to reason about how each component interacts with every other component in the program; rather, we can arrive at this conclusion solely based on reasoning about each component in isolation.

To maintain and ensure assertions about whole-program safety over time, especially for security-critical binaries, we need mechanisms to ensure constraints on the "soundness level" of all transitive dependencies of a binary (i.e., whether they consist of safe code only or have been expert-reviewed for soundness).

In practice, some transitive dependencies will have a lower level of assurance for their soundness. For example, a third-party OSS dependency might use unsafe constructs, but is not structured to expose cleanly-delineated safe abstractions that are effectively reviewable for soundness. Or, a dependency might consist of an FFI wrapper into legacy code written entirely in an unsafe language, making it effectively impossible to review for soundness to a high degree of assurance.

Security-critical binaries may want to express constraints such as "all transitive dependencies are either free of unsafe constructs or are expert-reviewed for soundness, with the following specific exceptions", where exceptions might be subject to additional scrutiny (e.g. extensive fuzz coverage). This allows the owners of a critical binary to maintain a well-understood and acceptable level of residual unsafety risk.
Memory Safety Guarantees and Tradeoffs

Applying Safe Coding principles to memory safety of a programming language and its surrounding ecosystem (libraries, program analysis tooling) involves tradeoffs, primarily between costs incurred at development time (e.g., cognitive load placed on developers) and at deployment and run time.

This section provides an overview of possible approaches to sub-classes of memory safety, and their associated tradeoffs.

Spatial Safety

Spatial safety is relatively straightforward to incorporate into a language and library ecosystem. The compiler and container types such as strings and vectors need to ensure that all accesses are checked to be in-bounds. Checks can be elided if proven to be unnecessary based on static analysis or type invariants. Typically, this means that type implementations need metadata (size/length) to check against. Approaches include:

- Bounds checks incorporated into APIs (e.g. std::vector::operator[] with safety assertions).
- Compiler-inserted bounds checks, potentially aided by annotations.
- Hardware support such as bounds-checked CHERI capabilities.

Safe languages such as Rust, Go, Java, etc., and their standard libraries, impose bounds checks for all indexed accesses. They are only elided if they can be proven redundant. It seems plausible, but has not been demonstrated for large-scale codebases like Google's monorepo or the Linux kernel, that an unsafe language such as C or C++ can be subsetted to achieve spatial safety.

Bounds checks incur a small, but unavoidable, runtime overhead. It is up to the developer to structure code such that bounds checks can be elided where they would otherwise accumulate to a significant overhead.
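A short Rust sketch of structuring code for elision (the helper is ours): one explicit up-front check can make the per-element checks provably redundant.

```rust
// Slicing performs a single bounds check up front; inside the loop the
// compiler can typically prove the per-iteration checks redundant.
fn sum_first(v: &[u64], n: usize) -> u64 {
    let head = &v[..n]; // panics here if n > v.len(): one check, up front
    let mut total = 0;
    for i in 0..head.len() {
        total += head[i]; // provably in-bounds; the check can be elided
    }
    total
}

fn main() {
    assert_eq!(sum_first(&[1u64, 2, 3, 4], 2), 3);
}
```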
Type and Initialization Safety

Making a language type- and initialization-safe may include:

- Disallowing type-unsafe code constructs such as (untagged) unions and reinterpret_cast.
- Compiler instrumentation that initializes values on the stack (unless the compiler can prove that the value will not be read before a later explicit write).
- Container type implementations that ensure that (accessible) elements are initialized.⁹

⁹ Zeroing memory may not be sufficient because not all types may have a valid zero value.

In statically-typed languages, type safety can be primarily ensured at compile time, without runtime overhead. However, there is some potential for runtime overhead in certain scenarios, for example:

- Unions must include a discriminator at runtime, and be represented as a type-safe higher-level construct (e.g. sum types). In some cases, the resulting memory overhead can be optimized away, e.g. Option in Rust.
- There may be superfluous initializations of values that are never read, but in a way that the compiler cannot prove. In cases where the overhead is significant (e.g. default initialization of large vectors), it is the responsibility of the programmer to structure code such that superfluous initializations can be avoided, for example through use of reserve and push, or optional types.
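A brief Rust sketch of both points (our example): an enum is a tagged union whose discriminant can sometimes be optimized away, and reserve-then-push avoids superfluous default initialization.

```rust
use std::mem::size_of;

// A Rust enum is a tagged (sum) type: the discriminant rules out reads
// at the wrong type, unlike an untagged C union.
enum Value {
    Int(i64),
    Text(String),
}

fn describe(v: &Value) -> String {
    match v {
        Value::Int(i) => format!("int: {i}"),
        Value::Text(s) => format!("text: {s}"),
    }
}

fn main() {
    // Niche optimization: Option<&u8> needs no extra discriminator byte,
    // because the all-zero (null) pattern is unused by valid references.
    assert_eq!(size_of::<Option<&u8>>(), size_of::<&u8>());

    // Avoiding superfluous default initialization: reserve, then push.
    let mut v = Vec::with_capacity(1024); // allocated but not zero-filled
    v.push(42u8);

    println!("{}", describe(&Value::Int(v[0] as i64)));
    println!("{}", describe(&Value::Text(String::from("tagged"))));
}
```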
Temporal Safety

Temporal safety is fundamentally a much harder problem than spatial safety: for spatial safety, it is possible to relatively cheaply instrument a program such that the safety precondition can be checked via an inexpensive runtime check (bounds check). In common cases it is straightforward to structure code such that bounds checks can be elided (e.g. using iterators).

In contrast, there is no straightforward way to establish the safety precondition for temporal safety of heap-allocated objects. Pointers and the allocations they point to, which in turn can themselves contain pointers, induce a directed (possibly cyclic) graph. The graph induced by the sequence of allocations and deallocations of an arbitrary program can get arbitrarily complex. It is in the general case impossible to infer properties of this graph based on static analysis of program code.

When an allocation is freed, all that is at hand is the graph node corresponding to this allocation. There is no a priori efficient (constant-time) way to determine whether there is still another inbound edge (i.e. another, still-reachable pointer into this allocation). Deallocating an allocation to which there are still inbound pointers implicitly invalidates those pointers (turns them into "dangling" pointers). A future dereferencing of such an invalid pointer would result in undefined behavior and a "use-after-free" bug.

Since the graph is directed, there is also no efficient (constant-time, or even linear in the number of inbound pointers) way to find all still-reachable pointers into the about-to-be-deleted allocation. If available, this could be used to explicitly invalidate/null those pointers, or to defer deallocation until all inbound pointers are deleted from the graph. Consequently, whenever a pointer is dereferenced, there is no efficient way to determine whether this operation constitutes undefined behavior because the pointer destination has already been freed.

There are broadly three ways to achieve rigorous temporal safety guarantees:

1. Ensure through compile-time checking that a pointer/reference cannot outlive the allocation it points to. For example, Rust implements this approach through the borrow checker and the exclusivity rule. This mode supports temporal safety of both heap and stack objects.
2. With runtime support, ensure that allocations are only deallocated when there are no valid pointers to them remaining.
3. With runtime support, ensure that pointers become invalid when the allocation they point to is deallocated, and raise a fault if such an invalid pointer is later dereferenced.
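A minimal sketch of approach 1 in Rust (our example): the program below is deliberately rejected at compile time; the compile error, not a runtime check, is what provides the guarantee.

```rust
// This sketch does NOT compile; the rejection is the point.
fn main() {
    let dangling;
    {
        let s = String::from("short-lived");
        dangling = &s;
    } // `s` is deallocated here while `dangling` still borrows it
    // error[E0597]: `s` does not live long enough
    println!("{dangling}"); // would be a use-after-free in C/C++
}
```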
Several variations of 2 and 3 have been devised, and they incur a non-trivial amount of runtime cost. Both reference counting and garbage collection provide the desired safety but can be expensive. Quarantining of deallocations is a strong mitigation, but does not fully guarantee safety and nevertheless carries an overhead. Memory tagging relies on specialized hardware and only provides probabilistic mitigation (unless combined with MarkUs [3, 1]).

In all cases, for temporal safety, there is no cheap (let alone free) lunch. Either developers structure and annotate code such that a compile-time checker (e.g., the Rust borrow checker) can statically prove temporal safety, or we pay the runtime overhead to achieve safety or even partially mitigate these bugs.

Unfortunately, temporal safety issues remain a large fraction of memory safety issues, as indicated by a variety of reports:

- Chrome: 51% of high/critical memory safety vulnerabilities [17]
- Android: 20% of high/critical memory safety CVEs in 2022
- Project Zero: 33% of in-the-wild memory safety exploits [10]
- Microsoft: 32% of memory safety CVEs [5]
- GWP-ASan: finds 4x more UAFs than OOBs across multiple ecosystems [19]

Runtime Techniques and Tradeoffs

A wide range of runtime instrumentation techniques have been explored to address temporal safety, but they all come with challenging tradeoffs. They have to take into account concurrency when used in multi-threaded programs, and in many cases only mitigate these bugs without providing guaranteed safety.

- Reference counting, either to provide the correct lifetime or to detect and prevent incorrect lifetimes. Variations of this technique include std::shared_ptr, Rust's Rc/Arc, automatic reference counting in Swift or Objective-C, and Chrome's experiment with DanglingPointerDetector. Enforced exclusivity may be used with reference counting to reduce its overhead, but not eliminate it.
- Garbage-collected heaps. Enforced exclusivity may also be combined with GC to reduce overhead.
- Quarantining of deallocations, based on reference counting and allocation poisoning, as proposed by Chrome's BackupRefPtr, or combined with traversal and invalidation of pointers to quarantined deallocations, as proposed by MarkUs [1]. These approaches could be seen as variations of reference counting and garbage collection that do not interfere with destructor timing while preventing reallocation behind dangling pointers, but trade that off by introducing poison values (and resulting undefined behavior) into the runtime if memory is accessed after being freed. They may provide only a partial mitigation rather than true temporal safety in some cases.
- Memory tagging labels pointers and allocated memory regions with one of a small set of tags (colors). When memory is deallocated and reallocated, it is re-colored according to a defined strategy. This implicitly invalidates remaining pointers, which would still have the "old" color. In practice, the set of tags/colors is small (e.g. 16 in the case of ARM MTE [18]). Thus in most cases it provides probabilistic mitigation rather than true safety, as there is a non-trivial chance (e.g., 6.25%) that dangling pointers are not marked as invalid because they were randomly re-colored with the same color. MTE also carries significant runtime overhead. Memory tagging also speeds up the MarkUs [1] and *Scan [3] approaches, providing strong temporal safety.
Production Language Safety Overview

This section provides a brief overview of the memory safety properties of current and near-future production languages at Google, and some languages that might play a role in a more distant future.

JVM languages (Java, Kotlin)

In Java and Kotlin, memory-unsafe code is clearly delineated and confined to use of the Java Native Interface (JNI). JDK standard libraries rely on a large number of native methods to invoke low-level system primitives and to use native libraries, e.g. for image parsing. The latter have been affected by memory safety vulnerabilities (e.g. CESA-2006-004, Sun Alert 1020226.1).

Java is a type-safe language. The JVM ensures spatial safety through runtime bounds checks and temporal safety based on a garbage-collected heap.

Java does not extend Safe Coding principles to concurrency: a well-typed program can have data races. However, the JVM ensures that data races cannot violate memory safety. For example, a data race can result in violation of higher-level invariants and exceptions being thrown, but cannot result in memory corruption.

Go

In Go, memory-unsafe code is clearly delineated and confined to code using package unsafe (with the exception of memory unsafety arising from data races, see below).

Go is a type-safe language. The Go compiler ensures that all values are initialized by default with their type's zero value, ensures spatial safety via run-time bounds checks, and temporal safety via a garbage-collected heap. Except via package unsafe, there is no facility to unsafely create pointers.

Go does not extend Safe Coding principles to concurrency: a well-typed Go program can have data races. Furthermore, data races can lead to violation of memory safety invariants¹⁰.

¹⁰ https:/
Rust

In Rust, memory-unsafe code is clearly delineated and confined to unsafe blocks. Rust is a type-safe language. Safe Rust enforces that all values are initialized, and ensures spatial safety by adding bounds checks where necessary. Dereferencing a raw pointer is not allowed in safe Rust.

Rust is the only mature, production-ready language that provides temporal safety without run-time mechanisms such as garbage collection or universally-applied refcounting, for large classes of code. Rust provides temporal safety through compile-time checks on the lifetimes of variables and references.

The constraints imposed by the borrow checker preclude the implementation of certain structures, in particular those involving cyclic reference graphs. The Rust standard library includes APIs that allow such structures to be implemented safely, but with runtime overhead (based on reference counting).

In addition to memory safety, Rust's safe subset also guarantees data-race safety ("Fearless Concurrency"). Incidentally, data-race safety allows Rust to safely avoid unnecessary overhead when using runtime temporal safety mechanisms: both Rc and Arc implement reference-counted pointers. However, Rc's type precludes it from being shared across threads, so Rc can safely use a cheaper, non-atomic counter.
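A small Rust sketch of this point (our example): the compiler lets `Arc` cross threads and stops `Rc` from doing so, which is what makes the cheaper counter safe.

```rust
use std::rc::Rc;
use std::sync::Arc;
use std::thread;

fn main() {
    // Arc uses atomic reference counts, so clones may cross threads.
    let shared = Arc::new(vec![1, 2, 3]);
    let worker = {
        let shared = Arc::clone(&shared);
        thread::spawn(move || shared.len())
    };
    assert_eq!(worker.join().unwrap(), 3);

    // Rc uses a plain (non-atomic) count; the compiler refuses to let
    // it cross a thread boundary, so the count can never be raced.
    let local = Rc::new(vec![1, 2, 3]);
    assert_eq!(local.len(), 3);
    // thread::spawn(move || local.len());
    // ^ rejected at compile time: `Rc<Vec<i32>>` is not `Send`.
}
```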
Carbon

Carbon is an experimental successor language to C++ with the explicit design goal to facilitate large-scale migration from existing C++ codebases. As of 2023, details of Carbon's safety strategy are still in flux¹¹. Carbon 0.2 plans to introduce a safe subset that provides rigorous memory safety guarantees. However, it will need to retain an effective migration strategy for existing unsafe C++ code. Handling mixtures of unsafe and safe Carbon code will need similar guard rails as with mixtures of C++ and a safe language like Rust.

¹¹ The Safety section of the Carbon Language design doc appears under "unfinished tales".

While we expect newly-written Carbon to be in its memory-safe subset, Carbon that originated from a migration from existing C++ will likely rely on unsafe Carbon constructs. We expect an automated, large-scale subsequent migration from unsafe to safe Carbon to be difficult and often impractical. Mitigation of memory safety risk in the remaining unsafe code will be based on hardening via build modes (similar to our handling of legacy C++ code). The hardened build mode will enable run-time mechanisms that attempt to prevent the exploitation of memory safety bugs.

A Safer C++

Given the large volume of pre-existing C++, we recognize that a transition to memory-safe languages might take decades, during which we will be developing and deploying code consisting of a mix of safe and unsafe languages. Consequently, we believe it is necessary to improve the safety of C++ (or its successor language if applicable).

While defining a rigorously memory-safe C++ subset that is sufficiently ergonomic and maintainable remains an open research question, it might in principle be possible to define a subset of C++ that provides reasonably strong memory safety guarantees. C++ safety efforts should take an iterative and data-oriented approach to defining a safer C++ subset: identifying the top security and reliability risks, and deploying guarantees and mitigations with the highest impact and ROI.
A stepping stone for an incremental transition

A safer C++ subset would provide a stepping stone towards a transition to memory-safe languages. For example, enforcing definite initialization or disallowing pointer arithmetic in a C++ codebase will simplify an eventual migration to Rust or safe Carbon. Similarly, adding lifetimes to C++ will improve interoperability with Rust. Consequently, in addition to targeting top risks, C++ safety investments should prioritize the improvements that will also accelerate and simplify an incremental adoption of memory-safe languages.

In particular, safe, performant and ergonomic interoperability is a key ingredient for an incremental transition to memory safety. Both Android and Apple are following a transition strategy centered around interoperability, with Rust [9, 20] and Swift [15, 14] respectively.

For this, we need improved interoperability tooling, and improved support of mixed-language code bases in existing build tooling.¹² In particular, the existing production-quality interoperability tooling for C++/Rust assumes a narrow API surface. This has been sufficient for some ecosystems, like Android, but other ecosystems have additional requirements. Higher-fidelity interoperability enables incremental adoption in additional ecosystems, as done for Swift already, and explored for Rust in Crubit. For Rust, there remain open questions, like how to guarantee that C++ code does not violate Rust code's exclusivity rule, which would create new forms of undefined behavior.

¹² Google has recently announced a $1M grant to support interop improvements.

By replacing components one by one, security improvements are delivered continuously instead of all at once at the end of a long rewrite. Note that a full rewrite may eventually be achieved with this incremental strategy, but without the risks typically associated with complete rewrites of large systems. Indeed, during that time, the system remains a single code base, continuously tested and shippable.
MTE

Memory Tagging [18] is a CPU feature, available in ARMv8.5a, that allows memory regions and pointers to be tagged with one of 16 tags. When enabled, dereferencing a pointer with a mismatching tag raises a fault.
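To make the mechanism concrete, here is a toy software model of tag checking in Rust (entirely our illustration: real MTE tags live in unused pointer bits and are checked by hardware on every load/store, not by library code):

```rust
// Toy model: both the "pointer" and the memory granule carry a 4-bit
// tag, and a load with a mismatching tag faults.
const TAG_BITS: u8 = 0x0F;

struct Granule {
    tag: u8,
    data: u8,
}

fn load(ptr_tag: u8, g: &Granule) -> Result<u8, &'static str> {
    if ptr_tag & TAG_BITS != g.tag & TAG_BITS {
        return Err("tag-check fault (possible use-after-free)");
    }
    Ok(g.data)
}

fn main() {
    let mut g = Granule { tag: 0x3, data: 42 };
    let stale_ptr_tag = 0x3; // "pointer" created before the free
    assert_eq!(load(stale_ptr_tag, &g), Ok(42));

    g.tag = 0x7; // deallocation + reallocation re-tags the granule
    assert!(load(stale_ptr_tag, &g).is_err()); // dangling access faults
}
```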
Multiple security features can be built on top of MTE, for instance:

- Use-after-free and out-of-bounds detection. When memory is deallocated (or reallocated), it is randomly re-tagged. This implicitly invalidates remaining pointers, which would still have the "old" tag. In practice, the set of tags is small (16). Thus it provides probabilistic mitigation rather than true safety, as there is a non-trivial chance (6.25%) that dangling pointers are not marked as invalid (because they were randomly re-tagged with the same tag). Similarly, this can also detect out-of-bounds bugs probabilistically. This can deterministically detect inter-allocation linear overflows, assuming the allocator ensures that consecutive allocations never share the same tag. It may be possible to build a deterministic heap use-after-free prevention on top of MTE using an additional GC-like scan like MarkUs.
- Sampled use-after-free and out-of-bounds detection. The same as above, but only on a fraction of allocations, to reduce runtime overhead sufficiently for broad deployment. With sampled MTE, exploits are expected to succeed after a few attempts: attacks won't be stopped. However, failed attempts generate noise (i.e. MTE crashes) we can inspect.

Using those two techniques, MTE can result in:

- Bugs being found sooner in the SDLC. Unsampled MTE should be cheap enough to deploy in presubmit and canaries.
- More bugs being detected in production. Sampled MTE permits a 3 orders of magnitude higher sampling rate compared to GWP-ASan at the same cost.
- Actionable crash reports. Synchronous MTE reports where the bug happened, instead of crashing due to hard-to-root-cause secondary effects of a bug. In addition, sampled MTE can be combined with heap instrumentation to provide bug reports with similar fidelity to GWP-ASan.
- Improved reliability and security as those bugs get fixed.
- A decrease in exploits' ROI for attackers. Attackers either need to find additional vulnerabilities to deterministically bypass MTE, or risk detection. Defenders' reaction speed will depend on their ability to distinguish exploitation attempts from other MTE violations; exploitation attempts may be able to hide in the noise of MTE violations happening organically. Even without the ability to distinguish exploitation attempts from organic MTE violations, MTE should reduce the exploitation window, i.e. how often and how long an attacker can reuse a given exploit. The faster MTE violations are fixed, the shorter the exploitation window will be, which decreases the ROI of exploits. This highlights the importance of fixing MTE violations promptly to achieve MTE's security potential. To do so without overwhelming developers, MTE should be combined with proactive work to reduce the volume of bugs.

Unsampled MTE may also be deployed as an exploit mitigation, deterministically protecting against 10%-15% of memory safety bugs (assuming no GC-like scan). However, due to non-trivial memory and runtime overhead, we expect production deployments to primarily be in small-footprint, but security-critical, workloads.
Despite its limitations, we believe MTE is a promising path to decrease the volume of temporal safety bugs in large existing C++ code bases. There are currently no alternatives for C++ temporal safety that can be realistically deployed at scale.

CHERI

CHERI [21] is an intriguing research project that has the potential to provide rigorous memory safety guarantees for legacy C++ code (and perhaps Carbon in hardened mode), with minimal porting effort. CHERI temporal safety guarantees rely on quarantining of deallocated memory [8] and sweeping revocation, and it remains an open question whether the runtime overhead will be acceptable for production workloads.

Beyond memory safety, CHERI capabilities also enable additional interesting security mitigations, such as fine-grained sandboxing.

Conclusion

After 50 years, memory safety bugs remain some of the most stubborn, and most dangerous, software weaknesses. As one of the leading causes of vulnerabilities, they continue to result in significant security risk. It has become increasingly clear that memory safety is a necessary property of safe software. Consequently, we expect the industry to accelerate the ongoing shift towards memory safety in the coming decade. We are encouraged by the progress already made at Google, and at other large software manufacturers.

We believe that a Secure-by-Design approach is required for high assurance memory safety, which requires adoption of languages with rigorous memory safety guarantees. Given the long timeline involved in a transition to memory-safe languages, it is also necessary to improve the safety of existing C and C++ code bases to the extent possible, through the elimination of vulnerability classes.

Acknowledgments

We would like to thank our colleagues Chandler Carruth, Kostya Serebryany, Kinuko Yasuda, Jon McCune, Manuel Klimek and Mark Brand for their helpful comments and contributions to this paper.
References

[1] S. Ainsworth and T. M. Jones. MarkUs: Drop-in use-after-free prevention for low-level languages. In 2020 IEEE Symposium on Security and Privacy (SP), pages 578-591. IEEE, 2020.
[2] J. P. Anderson. Computer Security Technology Planning Study. Technical Report ESD-TR-73-51, U.S. Air Force Electronic Systems Division, 10 1972. URL https://seclab.cs.ucdavis.edu/projects/history/papers/ande72.pdf.
[3] A. Bikineev, M. Lippautz, and H. Payer. Retrofitting Temporal Memory Safety on C++. https:/
[4] Towards the next generation of XNU memory safety: kalloc_type. https:/
[5] T. Chen. Security analysis of memory tagging. https:/
[6] J. Dean and L. A. Barroso. The tail at scale. Communications of the ACM, 56:74-80, 2013. URL http://cacm.acm.org/magazines/2013/2/160173-the-tail-at-scale/fulltext.
[7] K. Deus, J. Galenson, B. Lau, I. Lozano, and A. S. P. Team. Data driven security hardening in Android. https:/
[8] Markettos, A. Mazzinghi, R. Norton, M. Roe, P. Sewell, S. Son, T. M. Jones, S. Moore, P. G. Neumann, and R. N. M. Watson. Cornucopia: Temporal safety for CHERI heaps. In IEEE Symposium on Security and Privacy. IEEE, May 2020.
[9] J. Galenson, M. Maurer, and A. Team. Rust/C++ interop in the Android platform. https:/
[10] and P. Zero. 0day in the wild. https:/
[11] Technical People: Innovation, Teamwork, and the Software Process. 1997. ISBN 9788177582710.
[12] C. Kern. Developer ecosystems for software safety. ACM Queue, Jan/Feb 2024. URL https://doi.acm.org/10.1145/3648601. To appear.
[13] R. Levien. The Soundness Pledge. https://raphlinus.github.io/rust/2020/01/18/soundness-pledge.html, 2020. Accessed: 2023-12-06.
[14] K. Malawski. Swift as C++ successor in FoundationDB. Strange Loop, 2023. Accessed: 2023-12-06.
[15] J. McCall. Introducing a memory-safe successor language in large C++ code bases. CppNow, 2023. Accessed: 2023-12-06.
[16] MSRC. A proactive approach to more secure code. https:/
[17] Memory safety. https://www.chromium.org/home/chromium-security/memory-safety/. Accessed: 2023-12-06.
[18] K. Serebryany. ARM memory tagging extension and how it improves C/C++ memory safety. ;login: USENIX Mag, 44(5), 2019.
[19] K. Serebryany, C. Kennelly, M. Phillips, M. Denton, M. Elver, A. Potapenko, M. Morehouse, V. Tsyrklevich, C. Holler, J. Lettner, D. Kilzer, and L. Brandt. GWP-ASan: Sampling-based detection of memory-safety bugs in production, 2023.
[20] J. V. Stoep. Memory safe languages in Android 13. https:/
[21] M. Roe. The CHERI capability model: Revisiting RISC in an age of risk. ACM SIGARCH Computer Architecture News, 42(3):457-468, 2014.

Alex Rebert is a Senior Staff Software Engineer in Google's Security Foundation Team. His primary focus is on reducing memory safety risks.

Christoph Kern is a Principal Software Engineer in Google's Security Foundation Team. His primary focus is on developing scalable, principled approaches to software security.
187、ftware Engineer in Googles Security Foun-dation Team.His primary focus is on reducing memory safety risks.Christoph Kern is a Principal Software Engineer in Googles Security Foun-dation Team.His primary focus is on developing scalable,principled ap-proaches to software security.Secure by Design:Googles Perspective on Memory Safety13