上海品茶

Applause:2024测试生成式人工智能:降低风险和最大化机会报告(英文版)(18页).pdf

编号:166776 PDF  中文版   DOC 18页 2.85MB 下载积分:VIP专享
下载报告请您先登录!

Applause:2024测试生成式人工智能:降低风险和最大化机会报告(英文版)(18页).pdf

1、TESTING GENERATIVE AI:MITIGATING RISKS AND MAXIMIZING OPPORTUNITIESEBOOKPage 2 of 18 SECTION TITLE HERE1The Basics of Generative AIA novel technology that creates content based on user requests known as prompts,Generative AI(Gen AI)applications can produce new text,images,video and audio by synthesi

2、zing,summarizing,or generating content.While users interact with Gen AI much like they do a search engine,the way the two technologies generate responses is very different.Whereas search engines retrieve information in the format in which it is stored,Gen AI analyzes huge amounts of human-generated

3、data,known as training data,to learn how to respond to user requests in a way that provides value.The analysis is carried out by a large language model(LLM),a type of neural network.Over time,humans must continue to provide LLMs with fresh training data to fine-tune the model and keep the informatio

4、n it learns from up to date.Gen AI can automate and enhance any number of business tasks,ranging from drafting emails and generating financial reports to creating marketing content and analyzing customer interactions and use cases are expanding daily.Today,the technology is largely used to improve p

5、roductivity and personalize user experiences.Popular Gen AI applications include ChatGPT(OpenAI),Gemini(Google)and Copilot(Microsoft).While each application currently mainly offers text-based inputs,voice interaction is on the rise.Midjourney,Stable Diffusion and DALL-E are popular applications in t

6、he image generation category.UNDERSTANDING GENERATIVE AI1Page 3 of 18 Testing Generative AI:Mitigating Risks and Maximizing OpportunitiesKey terms explainedLarge language models(LLMs)AI foundation models,including large language models(LLMs)and diffusion models,power the vast majority of Gen AI appl

7、ications.These innovations represent a revolutionary development in the history of humanity because they automate tasks that have historically relied entirely on human effort,such as text generation,summarization and analysis.Just as the Industrial Revolution automated labor,the generative AI revolu

8、tion is automating intelligence.In fact,it is likely that the current era will be referred to as the intelligence revolution by future generations.LLMs are widely used in applications that interpret,transform or generate text.However,they are also employed in use cases that require image,audio,and c

9、ontent generation.In addition,AI models that generate visual and audio content often employ LLMs to interpret the users input request and are sometimes used to evaluate prompt inputs or outputs to respond to user questions.It is worth clarifying that an LLM is not an application.It is a technology t

10、hat supports an application.For example,ChatGPT is an application powered by a model called GPT-4o.Claude,a text-based Gen AI application,is powered by the LLMs Haiku,Sonnet and Opus.Meta AI,which forms part of the Facebook,Instagram and WhatsApp applications,is based on the LLM Lama 3.Multimodal LL

11、MsMultimodal is a term employed to describe LLMs that can process information provided by users in multiple content modes,such as text,images,diagrams and videos.They can also generate new content in these modes.The leading multimodal models can analyze and generate various content modes as part of

12、a single prompt.How Generative AI Differs from Traditional AIGenerative AI is often confused with two other AI technologies:traditional machine learning and natural language processing.Machine learning(ML)is typically employed to analyze large quantities of data and identify patterns,anomalies or hi

13、dden insights.While Gen AI also employs ML,it is mainly used to generate or transform content,rather than analyze it.Unlike traditional ML,which often functions asynchronously for data analysis,Gen AI typically operates as a runtime technology.Natural language processing(NLP)is most commonly used in

14、 voice assistants and chatbots.It is used to interpret human speech or text(such as when a user verbally asks a voice assistant like Amazon Alexa a question)and then match their question with a predefined answer.While NLP can process unbounded user input,it can only provide pre-determined outputs.Th

15、is makes it less versatile than Gen AI for responding to information-related user intents.Page 4 of 18 Testing Generative AI:Mitigating Risks and Maximizing OpportunitiesAdam Cheyer,the creator of the technology behind Apples Siri,said in a podcast that NLP-based solutions like Siri were optimized a

16、s doing assistants,like playing music or opening a users calendar.By contrast,many generative AI applications are knowing assistants.They excel at information-based queries and creative requests but are not as advanced in consistently executing procedural requests.Examples of NLP,Machine Learning an

17、d Gen AINatural Learning Processing Alexa Google Assistant Siri Bank of Americas Erica Most website chatbotsTraditional Machine Learning Proprietary business analytics Forecasting models Not typically used by consumers but sometimes exposed through algorithmsGenerative AI ChatGPT Microsoft Copilot G

18、oogle Gemini Meta AI Midjourney Stable DiffusionPage 5 of 18 SECTION TITLE HERESeveral current trends characterize Gen AI adoption today and where its headed.They are explored below.Three use cases are by far the most popularTo understand Gen AI,you need to consider its application for consumers and

19、 enterprises alike.Consumers are interested in using it for entertainment purposes and for assistance with day-to-day tasks,while enterprises are focused on leveraging it to boost staff productivity.All other use cases come a distant second in terms of priority.Text-driven inputs and outputs are lea

20、ding the wayWhile Gen AI can generate different types of content,from images to videos and audio,most value today comes from text.Historically,unstructured data in text form was often more of a burden than an asset to enterprises.The only way to create,manage,use and harvest value from it has been t

21、o assign staff to sift through the pages,which costs considerable time and resources.Gen AI is,therefore,particularly valuable for industries,company departments and use cases that create a lot of text or have a lot of text they need to analyze or transform.Internal solutions are popular,but externa

22、l holds most valueMany companies want to use Gen AI internally within their organization,for example a Gen-AI-powered productivity tool,as a means to test and learn more about the technology before introducing it into customer-facing scenarios(where risks may be higher).However,there is widespread r

23、ecognition that externally-facing generative AI solutions are likely to drive a more significant impact.Most benefit from Gen AI is anticipated to come from leveraging it for customer support,which has traditionally been a significant cost center for companies,and integrating it into core products t

24、o add customer value.GENERATIVE AI TRENDS2Page 6 of 18 Testing Generative AI:Mitigating Risks and Maximizing OpportunitiesBuild a new product,or just a feature?We all know that companies across industries are racing to integrate Gen AI into their product offering.They have a choice:create an entirel

25、y new Gen-AI-powered product,or integrate Gen AI capabilities into an existing product.ChatGPT,Gemini Advanced and Microsoft Copilot are examples of Gen AI applications that can be used in either scenario as they serve a broad base of users and businesses.Most companies are currently choosing to int

26、egrate Gen AI into their existing product suite,particularly in the areas of customer relationship management,HR knowledge bases,customer contact centers,and sales and marketing support.Narrow domain solutions are the futureIn the months after ChatGPTs introduction,companies released a number of gen

27、eral-purpose solutions.Microsoft Copilot,Google Bard(now Gemini),Anthropics Claude,Pi from Inflection AI,and ChatGPT are all broad-domain generative AI applications that can provide helpful responses to most knowledge-based questions.However,narrow domain solutions are now becoming more common.Perpl

28、exity is an example of a Gen AI application that could be applied to general-purpose tasks,yet is tightly focused on search use cases.Domain-focused applications will soon take the lead as the fastest-growing segment in generative AI.Page 7 of 18 SECTION TITLE HERE1GENERATIVE AI OPPORTUNITIES AND RI

29、SKS3Opportunities There are four key opportunities of Gen AI:1.Hyper-automation(available today):accelerating the completion of knowledge-based and high-skill tasks that rely on human-level common sense or expertise.It is already widely adopted.Research,content summarization and content generation a

30、re common use cases across user segments,while businesses are applying the technology to customer support,employee knowledge management,marketing content and report writing.2.Hyper-creation(available today):streamlines the generation of creative works,enabling more iterations to be completed within

31、a given time period.It remains in its infancy,despite being widely employed.Enterprises are using Gen AI on an ad hoc basis during ideation stages,and consumers are employing the technology for fun.3.Hyper-personalization(future promise):The ability of generative AI to create novel content at scale

32、means that personalized experiences and mass customization will become more practical.It is not currently meaningfully employed and is likely to take several years to develop.While it may become an important trend,there are technical,computational,organizational,legal and application barriers to dep

33、loying it broadly today.4.Convenience(future promise):The technology era of the past twenty years has been characterized by ever-increasing convenience,like faster communication and connectivity.It is a promising yet quieter opportunity presented by Gen AI.Even if convenience isnt the primary motiva

34、tion behind user adoption of Gen AI,it will be the reason most of them continue using it.Page 8 of 18 Testing Generative AI:Mitigating Risks and Maximizing OpportunitiesRisksGen AI also introduces novel risks.There are four risk categories:inaccuracy,bias,safety and security.Explore how these risks

35、are introduced into Gen AI systems.Gen AI sometimes produces inaccurate responses because it is a probabilistic technology.In the past,technologies like search engines drew responses almost entirely from databases or systems with curated and approved content.While Gen AI produces responses that are

36、based on existing data,the responses are always novel.This makes Gen AI more versatile than technologies like search,but can also introduce risks like inaccuracy,bias,safety or security issues.The quality of data used to train Gen AI systems is another risk factor.All AI models are influenced by the

37、 data used to train them,which means that inaccurate,biased,or unsafe information contained in the training data could show up in a user response.There are techniques to mitigate this impact,such as retrieval augmented generation(RAG),or limiting responses to pulling information from approved data s

38、ources.The new access points(i.e.,application surface area)and the design of Gen AI solutions to be helpful assistants introduce another element of risk that typically manifests as security concerns or introduces AI safety issues.Malicious actors will employ prompt injection techniques to compromise

39、 access points to conversational Gen AI systems.This may enable them to exfiltrate proprietary or sensitive information or modify how the models operate.In addition,the intentional design of the Gen AI assistants to be helpful can make it easier to compromise the system,as they will often default to

40、 attempting to fulfill the request.Gen AI Risks and CausesInaccuracyCaused by the probabilistic nature of Gen AIBiasCaused by the probabilistic nature of Gen AI+training dataSafetyCaused by the probabilistic nature of Gen AI+training dataSecurityCaused by the probabilistic nature of Gen AI+training

41、data+new application surface areasPage 9 of 18 SECTION TITLE HERE1METHODOLOGIES FOR EFFECTIVE GENERATIVE AI TESTING4Incorporating human insight into testing AI solutions is essential to ensure their reliability,ethical integrity and suitability for real-world applications.Solution developers have le

42、arned that,as AI technologies become increasingly sophisticated,simply relying on automated testing is insufficient.Human-in-the-loop methodologies,particularly through red teaming,play a crucial role in this context.Blended approaches that include both automated and human testers enable product tea

43、ms to anticipate potential failures,understand diverse user perspectives and refine AI behavior to align with human values and expectations.So,how do human feedback and adversarial testing impact the development and deployment of generative AI systems?Humans in the LoopHuman feedback must be integra

44、ted throughout the AI model development lifecycle.Human involvement is essential for gathering initial training data,providing nuanced feedback during model training stages and evaluating model behavior.This human-in-the-loop approach helps developers to refine AI responses to better meet human expe

45、ctations and address complex ethical issues.For instance,engaging with real customers before releasing an application can uncover unique insights that may not be evident during isolated development phases.This proactive engagement ensures that the AIs outputs are both practical and beneficial in rea

46、l-world applications.Page 10 of 18 Testing Generative AI:Mitigating Risks and Maximizing OpportunitiesRed TeamingRed teaming,traditionally used in cybersecurity,is employed in AI testing to identify vulnerabilities that could be exploited maliciously.The method helps surface a wide range of output-r

47、elated issues such as inaccuracies,unsafe content,or hallucinations issues that automated tests often do not catch.Human testers can challenge AI models through a systematic adversarial approach,in which they attempt to uncover latent problems.Using this information,developers can fortify the models

48、 against potential failure points.Diversity in Red TeamingDiverse perspectives are vital in red teaming because they help ensure that AI systems are tested against a broad range of scenarios and user interactions.By involving testers from varied demographics,companies can better understand how diffe

49、rent user groups might perceive and interact with an AI.Typically absent from automated testing,involving diverse perspectives mitigates risks related to AI safety and ethics while also enhancing the overall robustness of AI applications before they reach the public.Domain Expertise in Red TeamingIn

50、corporating specialists with deep knowledge of specific domains or industries is crucial for red teaming in AI to ensure that responses are not only correct but also contextually appropriate.These experts assess AI outputs for accuracy and relevance within fields such as law,medicine or technology.F

51、or example,legal experts are required to test AI systems designed to generate or review text in legal documents in order to ensure the outputs adhere to current laws and practices.User Experience in Red TeamingThe user experience aspect of red teaming focuses on how real users interact with an AI ap

52、plication,evaluating factors such as ease of use,satisfaction and engagement.Testers simulate typical user behaviors and responses to identify any UX issues that could impair usability or deter user adoption.Developers use this feedback to refine the AI to ensure it meets user expectations.Scale in

53、Red TeamingScaling red teaming efforts is essential as AI applications grow in complexity and reach.Large-scale testing involves a significant number of diverse testers to ensure that AI systems perform well across different regions,cultures and user demographics.It is also critical to identify long

54、-tail issues that arise infrequently and can have negative impacts but are only uncovered when testing systems at scale.Page 11 of 18 SECTION TITLE HERECHALLENGES AND STRATEGIES IN GENERATIVE AI TESTING5Common PitfallsCommon pitfalls in Gen AI testing are often related to the lack of attention given

55、 to red teaming principles.Developers tend to place most emphasis on testing the accuracy of Gen AI models because this is the easiest to test for;very often,accuracy testing is a matter of matching outputs with known company data.However,this is not always the case.The required depth of domain expe

56、rtise is high and many developers overlook experience-based subjective assessments.It is also difficult to discover long-tail issues across AI safety,bias and security.Challenges facing developers include:Domain Expertise:You need experts to validate the accuracy of content related to specific knowl

57、edge domains.Without subject matter experts,it is hard to assess whether model outputs are accurate and relevant in various contexts.For instance,medical experts are crucial for evaluating AI-generated health information,ensuring it adheres to current medical standards and practices,whereas security

58、 experts are needed to identify system vulnerabilities.Diversity:Failing to represent all user demographics and scenarios adequately can lead to biased or incomplete assessments of Gen AI outputs.Without diverse tester teams,evaluations may overlook instances where outputs favor certain groups,exhib

59、it biases or toxicity towards specific communities,lack cultural context,or fail to cater to users with disabilities.This can result in outputs that do not serve all user groups equally.Geography:Different regions have distinct cultural norms,language nuances and regulatory frameworks that can impac

60、t the expected or required performance of Gen AI models.Ignoring geographical considerations can result in models that are not suitable for specific markets,regions,or user groups,limiting their effectiveness and undermining adoption .Page 12 of 18 Testing Generative AI:Mitigating Risks and Maximizi

61、ng OpportunitiesScale:Testing AI models at scale is crucial to ensure their robustness and scalability in real-world deployment scenarios.Failure to assess performance at scale can result in unforeseen challenges and inefficiencies when the model is deployed in production environments with large vol

62、umes of data or users .It also creates the risk that long-tail,less frequent issues are not surfaced in testing and later make it into production.Risk MitigationTesting with Expert,Diverse Tester Teams at Scale:Domain experts are needed to verify the accuracy and relevance of outputs,ensuring that t

63、hese meet the required standards and expectations in specific fields.Diverse tester teams help uncover biases and ensure the model works effectively across different demographics and use cases.Conducting tests at scale can reveal performance bottlenecks,infrequent errors and other issues that may no

64、t become apparent in smaller-scale tests .Regression Testing and Auditing:As Gen AI models evolve and are updated,regression testing becomes vital to ensure that new versions do not introduce errors or degrade performance.Regular or continuous auditing helps maintain the quality and integrity of the

65、 model outputs over time,ensuring that updates or other changes do not negatively impact model behavior or the user experience.Legal,Security and Regulatory Considerations:Gen AI models must comply with legal and regulatory frameworks,particularly regarding data privacy and security.Testing should i

66、nclude checks for compliance with relevant laws and regulations to avoid potential legal issues and ensure the protection of sensitive information.This includes adherence to GDPR,CCPA and other relevant regulations .Inaccurate Information:Inaccuracy in Gen AI outputs is often the result of biases in

67、 training data or probabilistic errors(often referred to as hallucinations).Implementing rigorous validation processes,using techniques like retrieval-augmented generation(RAG)and fine-tuning models can help mitigate the risk of generating inaccurate information.Developers must also ensure that outp

68、uts are cross-verified against reliable data sources and expert reviewersSecurity Exposure:Gen AI systems can introduce new security vulnerabilities,such as exposing sensitive data or enterprise susceptibility to adversarial attacks.Security testing should be a critical component of the AI testing s

69、trategy,focusing on identifying vulnerabilities and mitigating potential security risks.This includes testing for data leakage,ensuring secure data handling and implementing robust access controls .Non-Discrimination:Developers must ensure Gen AI models do not produce discriminatory outputs.This inv

70、olves testing with diverse datasets and scenarios to uncover and address any biases that may exist in the model or source datasets.It is important to implement fairness checks and balance training data to reflect diverse populations accurately .Page 13 of 18 SECTION TITLE HERECASE STUDIES6There are

71、multiple approaches to Gen AI testing.Here are some case studies from Applauses collaboration with various international organizations:1.Involving Human Perspectives with Prompt Response GradingA leading software company had recently developed an AI chatbot to integrate into their financial services

72、 products.As is becoming increasingly common practice,they built the chatbot on top of an open-source LLM and trained it using their own customer data.Before launching the chatbot to the public,the company wanted to test the chatbot to identify potential weaknesses in the model and fine tune it to b

73、etter meet user needs.To achieve this,it wanted to use a technique called reinforcement learning from human feedback(RLHF),which helps LLMs to understand human goals,wants and needs so they can better serve them.However,the company had limited internal expertise in this area and no way to involve re

74、al users in testing.So,the company decided to engage multiple external providers to deliver pilot projects.Ultimately,the company selected Applause as a strategic partner for RLHF and immediately started to evaluate thousands of chatbot responses on a weekly basis.Applause put together a team of tes

75、ters from various locales and backgrounds to ensure the model was scrutinized by diverse perspectives.In a practice known as prompt response grading,the testers graded the accuracy of each chatbot response on a scale of one to five.In addition,they flagged harmful responses and tagged them by catego

76、ries such as biased,toxic,private or adult content.Once they knew when and under which circumstances harmful responses occurred,the company was able to fine tune the model to mitigate issues.It successfully reduced safety concerns related to bias,toxicity,inaccuracies and hallucinations,as well as i

77、ssues that Testing Generative AI:Mitigating Risks and Maximizing Opportunitieslimited the value of responses for customers.For example,testers found that the chatbot was unable to interpret prompts containing idioms,leading to confusing responses.The company continues to work with Applause to test a

78、nd retrain the model in a continuous feedback loop.2.Executing Adversarial Testing With Red TeamingOne of the worlds largest technology companies wanted to gauge how well its chatbot could handle adversarial prompts,ie.prompts that actively seek to elicit offensive responses from an AI chatbot.To do

79、 this,it wanted to use red teaming.In the case of this company,the developer team needed to evaluate how well the chatbot responded to adversarial prompts relating to chemical and biological attacks.To execute red teaming effectively,the developers needed huge amounts of human-generated adversarial

80、prompts as well as examples of good and bad responses.This way,it could train the model to recognise or preempt potentially dangerous usage and respond appropriately.Applause worked with the company to source a team of highly specialized testers with rare scientific domain knowledge and a thorough u

81、nderstanding of complex scientific principles.This enabled the test team to generate large test datasets of offensive prompts,complete with prompt tags,so that the model could recognize different types of dangerous content.They also generated training datasets of exemplary responses.Through leveragi

82、ng these datasets,the company introduced guardrails into the model to protect users from potentially harmful content.It also incorporated red teaming into its operational planning to continue to assess how the model responds to different,potentially dangerous scenarios over time.Page 15 of 18 Testin

83、g Generative AI:Mitigating Risks and Maximizing Opportunities3.Testing Pre-launch With a Trusted Tester ProgramBefore launching a consumer chatbot to millions of users,a global high-tech company wanted to put its reliability and robustness to the test.It also wanted to make sure it mitigated the cha

84、nce of biased,toxic and inaccurate content as much as possible.Together with Applause,the company initiated a trusted tester program to thoroughly stress test the chatbot under real-world conditions.Applause meticulously recruited,vetted and managed a diverse team of 10,000 testers from 6 different

85、countries that was responsible for evaluating the chatbots performance across various cultural and linguistic contexts.The testers worked in tandem with a dedicated trusted tester program team,also established by Applause,which reported and triaged bugs reported by testers on a daily basis.The testi

86、ng phase spanned four weeks,during which testers generated thousands of prompts spanning an array of topics.The testers also provided qualitative feedback that was aggregated by location to provide richer context that could guide improvements to the model.By the end of the testing phases,Applause ha

87、d successfully delivered 100%of all testing scenarios.The intensive testing process significantly improved the models accuracy and performance,resulting in greater user satisfaction:over the four-week period,the customers Net Promoter Score(NPS)increased by more than four points.The customer launche

88、d its chatbot on time,confident in the quality of its chatbots responses.Page 16 of 18 SECTION TITLE HEREFUTURE DIRECTION7Generative AI Testing and the Critical Role of Humans in the LoopGen AI technology is advancing rapidly and with it the AI testing landscape.As models become more sophisticated,t

89、esting them becomes more complicated.As a result,developers must adapt and improve their testing methodologies.While Gen AI models promise to deliver more automation and convenience,humans still play a critical role,as our oversight ensures AI systems function as intended and adhere to company stand

90、ards.As AI continues to evolve,our role will expand to include more sophisticated oversight,ethical evaluations and management.The Importance of Maintaining Human OversightHumans must ensure that AI outputs are fair and unbiased,do not perpetuate harmful stereotypes or misinformation,respect user pr

91、ivacy and ensure data security.Real people are needed to interpret and evaluate AI outputs to ensure they meet societal norms,align with cultural values and meet company objectives.Humans also help to prevent the misuse of AI technologies.Anticipated Developments in Human-AI Collaboration for Testin

92、gFuture developments in Gen AI will likely see increased collaboration between humans and AI in testing processes.While AI will assist humans to identify testing scenarios,automate repeatable testing procedures,analyze test results and suggest improvements,humans will provide the nuanced understandi

93、ng and judgment that AI lacks.Page 17 of 18 Testing Generative AI:Mitigating Risks and Maximizing OpportunitiesAdvances in AI-augmented testing tools will support developers to simulate a wide range of scenarios,identify potential issues more effectively and cover a greater number of edge cases and

94、long-tail scenarios.Future AI systems will also incorporate continuous learning mechanisms,where feedback from testing and real-world deployment is constantly used to improve model performance.This iterative approach ensures that AI systems evolve and adapt over time,maintaining high standards of pe

95、rformance and compliance.ConclusionGenerative AI offers significant opportunities beyond hyper-automation,hyper-creation,hyper-personalization and convenience.Such opportunities are driving Gen AI adoption across all industries,from business process automation to creative content generation.As the t

96、echnology progresses,the industry will start to produce more innovative and nuanced applications.However,the deployment of Gen AI also introduces novel risks,including inaccuracy,bias,safety concerns and security vulnerabilities.These risks stem from the probabilistic nature of AI,the influence of t

97、raining data and the new application surface areas that AI technologies expose to adversarial attacks.To navigate these challenges and capitalize on the opportunities,human feedback is a critical success factor.Human insights help ensure that AI systems remain accurate,fair and secure.Diverse tester

98、 teams ensure model outputs are unbiased,culturally relevant and inclusive across different demographics and use cases.The collaboration between humans and AI will become increasingly sophisticated,with continuous learning systems and AI-augmented testing tools playing pivotal roles in refining AI m

99、odels.Machines need human feedback so they can serve humans better.Despite continued advancements in AI technology,the role of humans in guiding,supervising and improving AI systems remains indispensable.Human insight,judgment and contextual understanding are critical to ensuring that AI systems are

100、 effective,reliable and aligned with human values.By embracing a collaborative approach to AI development and testing,we can harness the full potential of Gen AI while mitigating risks and ensuring that it serves the greater good.About ApplauseApplause is the world leader in testing and digital qual

101、ity.Brands today win or lose customers through digital interactions,and Applause alone can deliver authentic feedback on the quality of digital assets and experiences,provided by real users in real-world settings.Our disruptive approach harnesses the power of the Applause platform and leverages a co

102、mmunity of more than one million digital experts worldwide.Unlike traditional testing methods(including lab-based and offshoring),we respond with the speed,scale and flexibility that digital-focused brands require and expect.Applause provides insightful,actionable testing results that can directly i

103、nform go/no go release decisions,helping development teams build better and faster,and release with confidence.Thousands of digital-first brands including Ford,Google,Western Union and Dow Jones rely on Applause as a best practice to deliver the digital experiences their customers love.Learn more at NORTH AMERICA100 Pennsylvania Avenue Framingham,MA 01701 1.844.300.2777EUROPEObentrautstr.72 10963 Berlin,Germany+49.30.57700400ISRAEL10 HaMenofim Street Herzliya,Israel 4672561+972.74.757.1300

友情提示

1、下载报告失败解决办法
2、PDF文件下载后,可能会被浏览器默认打开,此种情况可以点击浏览器菜单,保存网页到桌面,就可以正常下载了。
3、本站不支持迅雷下载,请使用电脑自带的IE浏览器,或者360浏览器、谷歌浏览器下载即可。
4、本站报告下载后的文档和图纸-无水印,预览文档经过压缩,下载后原文更清晰。

本文(Applause:2024测试生成式人工智能:降低风险和最大化机会报告(英文版)(18页).pdf)为本站 (白日梦派对) 主动上传,三个皮匠报告文库仅提供信息存储空间,仅对用户上传内容的表现方式做保护处理,对上载内容本身不做任何修改或编辑。 若此文所含内容侵犯了您的版权或隐私,请立即通知三个皮匠报告文库(点击联系客服),我们立即给予删除!

温馨提示:如果因为网速或其他原因下载失败请重新下载,重复下载不扣分。
客服
商务合作
小程序
服务号
会员动态
会员动态 会员动态:

wei**n_... 升级为高级VIP   187**11... 升级为至尊VIP

 189**10... 升级为至尊VIP  188**51...   升级为高级VIP

134**52...  升级为至尊VIP  134**52... 升级为标准VIP

wei**n_... 升级为高级VIP 学**... 升级为标准VIP 

 liv**vi...  升级为至尊VIP 大婷 升级为至尊VIP 

wei**n_... 升级为高级VIP  wei**n_...  升级为高级VIP 

微**... 升级为至尊VIP   微**... 升级为至尊VIP 

 wei**n_... 升级为至尊VIP  wei**n_... 升级为至尊VIP

wei**n_... 升级为至尊VIP   战** 升级为至尊VIP 

 玍子 升级为标准VIP   ken**81... 升级为标准VIP

185**71...  升级为标准VIP    wei**n_... 升级为标准VIP

微**...  升级为至尊VIP  wei**n_... 升级为至尊VIP 

 138**73... 升级为高级VIP 138**36... 升级为标准VIP  

138**56...  升级为标准VIP   wei**n_... 升级为至尊VIP 

wei**n_... 升级为标准VIP  137**86... 升级为高级VIP

 159**79... 升级为高级VIP   wei**n_... 升级为高级VIP 

139**22...  升级为至尊VIP 151**96... 升级为高级VIP 

wei**n_... 升级为至尊VIP   186**49...  升级为高级VIP

 187**87...  升级为高级VIP  wei**n_... 升级为高级VIP

wei**n_... 升级为至尊VIP  sha**01... 升级为至尊VIP 

wei**n_...  升级为高级VIP 139**62...  升级为标准VIP

  wei**n_... 升级为高级VIP 跟**... 升级为标准VIP  

182**26... 升级为高级VIP  wei**n_... 升级为高级VIP

136**44... 升级为高级VIP   136**89...  升级为标准VIP

wei**n_...  升级为至尊VIP wei**n_... 升级为至尊VIP

 wei**n_... 升级为至尊VIP wei**n_... 升级为高级VIP 

 wei**n_... 升级为高级VIP  177**45... 升级为至尊VIP

 wei**n_... 升级为至尊VIP    wei**n_... 升级为至尊VIP

微**...  升级为标准VIP   wei**n_... 升级为标准VIP 

wei**n_...  升级为标准VIP 139**16...  升级为至尊VIP

  wei**n_... 升级为标准VIP wei**n_... 升级为高级VIP 

 182**00... 升级为至尊VIP  wei**n_...  升级为高级VIP

 wei**n_...  升级为高级VIP wei**n_...  升级为标准VIP

133**67...   升级为至尊VIP  wei**n_... 升级为至尊VIP 

柯平  升级为高级VIP  shi**ey...  升级为高级VIP

 153**71... 升级为至尊VIP  132**42... 升级为高级VIP

 wei**n_... 升级为至尊VIP   178**35... 升级为至尊VIP

wei**n_... 升级为高级VIP  wei**n_...  升级为至尊VIP

 wei**n_... 升级为高级VIP  wei**n_... 升级为高级VIP 

133**95... 升级为高级VIP    188**50... 升级为高级VIP

138**47...  升级为高级VIP 187**70... 升级为高级VIP

 Tom**12... 升级为至尊VIP 微**... 升级为至尊VIP

wei**n_... 升级为至尊VIP   156**93... 升级为至尊VIP

 wei**n_...  升级为高级VIP  wei**n_... 升级为至尊VIP

wei**n_... 升级为标准VIP    小敏 升级为高级VIP

hak**a9... 升级为至尊VIP 185**56... 升级为高级VIP  

 156**93... 升级为标准VIP   wei**n_...  升级为至尊VIP

wei**n_... 升级为至尊VIP   Br**e有... 升级为至尊VIP

 wei**n_... 升级为标准VIP  wei**n_... 升级为高级VIP

 wei**n_...  升级为至尊VIP 156**20...  升级为至尊VIP