Google Certified Professional Cloud Architect Exam Question Bank

Reading time: approximately 196 minutes

Google Cloud Architect certification question bank, compiled 2024-10-07

A complete collection of Google Cloud Platform (GCP) exam questions: the latest 2024 question bank, continuously updated and among the most comprehensive available. The GCP certification carries significant weight, and this collection is useful for self-study and for moving into the cloud industry. It tracks recent exam revisions so you can follow the latest changes.

 

QUESTION 81

Case Study: 4 - Dress4Win case study

 

Company Overview

Dress4win is a web-based company that helps their users organize and manage their personal wardrobe using a website and mobile application. The company also cultivates an active social network that connects their users with designers and retailers. They monetize their services through advertising, e-commerce, referrals, and a freemium app model.

 

Company Background

Dress4win's application has grown from a few servers in the founder's garage to several hundred servers and appliances in a colocated data center. However, the capacity of their infrastructure is now insufficient for the application's rapid growth. Because of this growth and the company's desire to innovate faster, Dress4win is committing to a full migration to a public cloud.

 

Solution Concept

For the first phase of their migration to the cloud, Dress4win is considering moving their development and test environments. They are also considering building a disaster recovery site, because their current infrastructure is at a single location. They are not sure which components of their architecture they can migrate as is and which components they need to change before migrating them.

 

Existing Technical Environment

The Dress4win application is served out of a single data center location.

 

Databases:

MySQL - user data, inventory, static data
Redis - metadata, social graph, caching

Application servers:

Tomcat - Java micro-services
Nginx - static content


Apache Beam - Batch processing

 

Storage appliances:

iSCSI for VM hosts

Fiber channel SAN - MySQL databases
NAS - image storage, logs, backups

Apache Hadoop/Spark servers:

Data analysis

Real-time trending calculations

MQ servers:

Messaging

 

Social notifications
Events

Miscellaneous servers:

Jenkins, monitoring, bastion hosts, security scanners

 

Business Requirements

Build a reliable and reproducible environment with scaled parity of production.
Improve security by defining and adhering to a set of security and Identity and Access Management (IAM) best practices for cloud.
Improve business agility and speed of innovation through rapid provisioning of new resources.
Analyze and optimize architecture for performance in the cloud.
Migrate fully to the cloud if all other requirements are met.

 

Technical Requirements

Evaluate and choose an automation framework for provisioning resources in cloud.
Support failover of the production environment to cloud during an emergency.
Identify production services that can migrate to cloud to save capacity.
Use managed services whenever possible.
Encrypt data on the wire and at rest.

Support multiple VPN connections between the production data center and cloud environment.

 

CEO Statement

Our investors are concerned about our ability to scale and contain costs with our current infrastructure. They are also concerned that a new competitor could use a public cloud platform to offset their up-front investment and free them to focus on developing better features.

 

CTO Statement

We have invested heavily in the current infrastructure, but much of the equipment is approaching the end of its useful life. We are consistently waiting weeks for new gear to be racked before we can start new projects. Our traffic patterns are highest in the mornings and weekend evenings; during other times, 80% of our capacity is sitting idle.

 

CFO Statement

Our capital expenditure is now exceeding our quarterly projections. Migrating to the cloud will likely cause an initial increase in spending, but we expect to fully transition before our next hardware refresh cycle. Our total cost of ownership (TCO) analysis over the next 5 years puts a cloud strategy between 30% and 50% lower than our current model.

 

For this question, refer to the Dress4Win case study. Dress4Win would like to become familiar with deploying applications to the cloud by successfully deploying some applications quickly, as is. They have asked for your recommendation. What should you advise?

 

A.       Identify self-contained applications with external dependencies as a first move to the cloud.

B.       Identify enterprise applications with internal dependencies and recommend these as a first move to the cloud.


C.      Suggest moving their in-house databases to the cloud and continue serving requests to on-premise applications.

D.      Recommend moving their message queuing servers to the cloud and continue handling requests to on-premise applications.

 

Correct Answer: A

Section: (none)

 

QUESTION 82

Case Study: 4 - Dress4Win case study

 

Company Overview

Dress4win is a web-based company that helps their users organize and manage their personal wardrobe using a website and mobile application. The company also cultivates an active social network that connects their users with designers and retailers. They monetize their services through advertising, e-commerce, referrals, and a freemium app model.

 

Company Background

Dress4win's application has grown from a few servers in the founder's garage to several hundred servers and appliances in a colocated data center. However, the capacity of their infrastructure is now insufficient for the application's rapid growth. Because of this growth and the company's desire to innovate faster, Dress4win is committing to a full migration to a public cloud.

 

Solution Concept

For the first phase of their migration to the cloud, Dress4win is considering moving their development and test environments. They are also considering building a disaster recovery site, because their current infrastructure is at a single location. They are not sure which components of their architecture they can migrate as is and which components they need to change before migrating them.

 

Existing Technical Environment

The Dress4win application is served out of a single data center location.

 

Databases:

MySQL - user data, inventory, static data

Redis - metadata, social graph, caching

Application servers:

Tomcat - Java micro-services
Nginx - static content

Apache Beam - Batch processing

 

Storage appliances:

iSCSI for VM hosts

Fiber channel SAN - MySQL databases
NAS - image storage, logs, backups

Apache Hadoop/Spark servers:

Data analysis

Real-time trending calculations

MQ servers:

Messaging

 

Social notifications
Events

Miscellaneous servers:


Jenkins, monitoring, bastion hosts, security scanners

Business Requirements

Build a reliable and reproducible environment with scaled parity of production.
Improve security by defining and adhering to a set of security and Identity and Access Management (IAM) best practices for cloud.
Improve business agility and speed of innovation through rapid provisioning of new resources.
Analyze and optimize architecture for performance in the cloud.
Migrate fully to the cloud if all other requirements are met.

 

Technical Requirements

Evaluate and choose an automation framework for provisioning resources in cloud.
Support failover of the production environment to cloud during an emergency.
Identify production services that can migrate to cloud to save capacity.
Use managed services whenever possible.
Encrypt data on the wire and at rest.

Support multiple VPN connections between the production data center and cloud environment.

 

CEO Statement

Our investors are concerned about our ability to scale and contain costs with our current infrastructure. They are also concerned that a new competitor could use a public cloud platform to offset their up-front investment and free them to focus on developing better features.

 

CTO Statement

We have invested heavily in the current infrastructure, but much of the equipment is approaching the end of its useful life. We are consistently waiting weeks for new gear to be racked before we can start new projects. Our traffic patterns are highest in the mornings and weekend evenings; during other times, 80% of our capacity is sitting idle.

 

CFO Statement

Our capital expenditure is now exceeding our quarterly projections. Migrating to the cloud will likely cause an initial increase in spending, but we expect to fully transition before our next hardware refresh cycle. Our total cost of ownership (TCO) analysis over the next 5 years puts a cloud strategy between 30% and 50% lower than our current model.

 

For this question, refer to the Dress4Win case study. As part of Dress4Win's plans to migrate to the cloud, they want to be able to set up a managed logging and monitoring system so they can handle spikes in their traffic load. They want to ensure that:

 

-  The infrastructure can be notified when it needs to scale up and down to handle the ebb and flow of usage throughout the day

-  Their administrators are notified automatically when their application reports errors.

-  They can filter their aggregated logs down in order to debug one piece of the application across many hosts

Which Google StackDriver features should they use?

 

A.       Logging, Alerts, Insights, Debug

B.       Monitoring, Trace, Debug, Logging

C.      Monitoring, Logging, Alerts, Error Reporting

D.      Monitoring, Logging, Debug, Error Reporting

 

Correct Answer: C

Section: (none)
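The log-filtering requirement in this question maps to Stackdriver Logging (now Cloud Logging) advanced filters. The snippet below is a minimal sketch of pulling one micro-service's error entries across many hosts with the google-cloud-logging client; the project ID, label name, and service name are assumptions for illustration.

```python
from google.cloud import logging

# Minimal sketch: filter aggregated logs down to one micro-service across many hosts.
# "my-project" and the "service" label are placeholders, not values from the case study.
client = logging.Client(project="my-project")

log_filter = (
    'resource.type="gce_instance" '
    'AND labels.service="checkout-api" '
    'AND severity>=ERROR'
)

for entry in client.list_entries(filter_=log_filter, order_by=logging.DESCENDING):
    print(entry.timestamp, entry.severity, entry.payload)
```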

 

QUESTION 83

You created a pipeline that can deploy your source code changes to your infrastructure in instance groups for self-healing. One of the changes negatively affects your key performance indicator. You are not sure how to fix it, and investigation could take up to a week.

 

What should you do?

 

A.       Log in to a server, and iterate on the fix locally

B.       Revert the source code change, and rerun the deployment pipeline


C.      Log into the servers with the bad code change, and swap in the previous code

D.      Change the instance group template to the previous one, and delete all instances

 

Correct Answer: B

Section: (none)

 

QUESTION 84

Your organization wants to control IAM policies for different departments independently, but centrally. Which approach should you take?

A.       Multiple Organizations with multiple Folders

B.       Multiple Organizations, one for each department

C.      A single Organization with Folders for each department

D.      A single Organization with multiple projects, each with a central owner

 

Correct Answer: C

Section: (none)
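A single Organization with one Folder per department lets each department administer its own IAM policy while the Organization node remains the central control point. As a rough sketch (assuming the gcloud CLI is installed; the organization ID, folder ID, department names, and group address are placeholders), folder creation and a folder-level IAM binding might look like this:

```python
import subprocess

ORG_ID = "123456789012"          # placeholder organization ID
DEPARTMENTS = ["finance", "engineering"]

for dept in DEPARTMENTS:
    # Create one folder per department under the single Organization.
    subprocess.run([
        "gcloud", "resource-manager", "folders", "create",
        f"--display-name={dept}", f"--organization={ORG_ID}",
    ], check=True)

# Grant a department group rights only on its own folder
# (folder ID and group address are placeholders).
subprocess.run([
    "gcloud", "resource-manager", "folders", "add-iam-policy-binding", "987654321098",
    "--member=group:finance-admins@example.com",
    "--role=roles/resourcemanager.projectCreator",
], check=True)
```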

 

QUESTION 85

You want to make a copy of a production Linux virtual machine in the US-Central region. You want to manage and replace the copy easily if there are changes on the production virtual machine. You will deploy the copy as a new instance in a different project in the US-East region.

What steps must you take?

 

A.       Use the Linux dd and netcat commands to copy and stream the root disk contents to a new virtual machine instance in the US-East region.

B.       Create a snapshot of the root disk and select the snapshot as the root disk when you create a new virtual machine instance in the US-East region.

C.      Create an image file from the root disk with the Linux dd command, create a new disk from the image file, and use it to create a new virtual machine instance in the US-East region.

D.      Create a snapshot of the root disk, create an image file in Google Cloud Storage from the snapshot, and create a new virtual machine instance in the US-East region using the image file for the root disk.

 

Correct Answer: D

Section: (none)
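One way to realize the snapshot-to-image-to-instance flow of option D is with the gcloud CLI, wrapped here from Python. The project IDs, zones, and resource names are placeholders, and the destination project needs permission to read the image (or the image can be shared explicitly).

```python
import subprocess

def gcloud(*args):
    """Run a gcloud command and fail loudly if it errors."""
    subprocess.run(["gcloud", *args], check=True)

# 1. Snapshot the production root disk (names, zones, and projects are placeholders).
gcloud("compute", "disks", "snapshot", "prod-vm",
       "--zone=us-central1-a", "--snapshot-names=prod-root-snap",
       "--project=prod-project")

# 2. Create a reusable image from that snapshot in the production project.
gcloud("compute", "images", "create", "prod-root-image",
       "--source-snapshot=prod-root-snap", "--project=prod-project")

# 3. Boot the copy in a different project and region from the shared image.
gcloud("compute", "instances", "create", "prod-vm-copy",
       "--zone=us-east1-b", "--project=dr-project",
       "--image=prod-root-image", "--image-project=prod-project")
```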

 



QUESTION 86

You are designing a mobile chat application. You want to ensure people cannot spoof chat messages by proving that a message was sent by a specific user.

 

What should you do?

 

A.       Tag messages client side with the originating user identifier and the destination user.

B.       Encrypt the message client side using block-based encryption with a shared key.

C.      Use public key infrastructure (PKI) to encrypt the message client side using the originating user's private key.

D.      Use a trusted certificate authority to enable SSL connectivity between the client application and the server.

 

Correct Answer: C

Section: (none)
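Option C amounts to having the sender sign each message with their private key so any recipient can verify it against the sender's certificate or public key. A minimal sketch with the Python cryptography package follows; the keypair is generated on the fly purely for illustration, whereas a real PKI would issue and distribute the keys.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# Illustration only: generate a keypair here. In a real PKI the private key stays
# on the sender's device and the public key is distributed via certificates.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

message = b"hello from alice"

# Sender: sign the message client side with the originating user's private key.
signature = private_key.sign(
    message,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)

# Recipient: verify() raises InvalidSignature if the message was spoofed or altered.
public_key.verify(
    signature,
    message,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)
print("signature verified")
```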

 

QUESTION 87

As part of implementing their disaster recovery plan, your company is trying to replicate their production MySQL database from their private data center to their GCP project using a Google Cloud VPN connection. They are experiencing latency issues and a small amount of packet loss that is disrupting the replication.

What should they do?

 

A.       Configure their replication to use UDP.


B.       Configure a Google Cloud Dedicated Interconnect.

C.      Restore their database daily using Google Cloud SQL.

D.      Add additional VPN connections and load balance them.

E.       Send the replicated transaction to Google Cloud Pub/Sub.

 

Correct Answer: B

Section: (none)

 

QUESTION 88

Case Study: 5 - Dress4win

 

Company Overview

Dress4win is a web-based company that helps their users organize and manage their personal wardrobe using a website and mobile application. The company also cultivates an active social network that connects their users with designers and retailers. They monetize their services through advertising, e-commerce, referrals, and a freemium app model. The application has grown from a few servers in the founder's garage to several hundred servers and appliances in a colocated data center. However, the capacity of their infrastructure is now insufficient for the application's rapid growth. Because of this growth and the company's desire to innovate faster, Dress4Win is committing to a full migration to a public cloud.

 

Solution Concept

For the first phase of their migration to the cloud, Dress4win is moving their development and test environments. They are also building a disaster recovery site, because their current infrastructure is at a single location. They are not sure which components of their architecture they can migrate as is and which components they need to change before migrating them.

 

Existing Technical Environment

The Dress4win application is served out of a single data center location. All servers run Ubuntu LTS v16.04.

Databases:

MySQL. 1 server for user data, inventory, static data:

 

-  MySQL 5.8

-  8 core CPUs

-  128 GB of RAM

-  2x 5 TB HDD (RAID 1)

 

Redis 3 server cluster for metadata, social graph, caching. Each server is:

 

-  Redis 3.2

-  4 core CPUs

-  32GB of RAM

 

Compute:

40 Web Application servers providing micro-services based APIs and static content.

-  Tomcat - Java

-  Nginx

-  4 core CPUs

-  32 GB of RAM

 

20 Apache Hadoop/Spark servers:

 

-  Data analysis

-  Real-time trending calculations

-  8 core CPUS

-  128 GB of RAM

-  4x 5 TB HDD (RAID 1)

 

3 RabbitMQ servers for messaging, social notifications, and events:

 

-  8 core CPUs

-  32GB of RAM


Miscellaneous servers:

 

-  Jenkins, monitoring, bastion hosts, security scanners

-  8 core CPUs

-  32GB of RAM

 

Storage appliances:

iSCSI for VM hosts
Fiber channel SAN

-  1 PB total storage; 400 TB available

NAS

-  100 TB total storage; 35 TB available

 

Business Requirements

Build a reliable and reproducible environment with scaled parity of production.
Improve security by defining and adhering to a set of security and Identity and Access Management (IAM) best practices for cloud.
Improve business agility and speed of innovation through rapid provisioning of new resources.
Analyze and optimize architecture for performance in the cloud.

Technical Requirements

Easily create non-production environment in the cloud.

 

Implement an automation framework for provisioning resources in cloud.

Implement a continuous deployment process for deploying applications to the on-premises datacenter or cloud.

Support failover of the production environment to cloud during an emergency.
Encrypt data on the wire and at rest.

Support multiple private connections between the production data center and cloud environment.

Executive Statement

Our investors are concerned about our ability to scale and contain costs with our current infrastructure. They are also concerned that a competitor could use a public cloud platform to offset their up-front investment and free them to focus on developing better features. Our traffic patterns are highest in the mornings and weekend evenings; during other times, 80% of our capacity is sitting idle. Our capital expenditure is now exceeding our quarterly projections. Migrating to the cloud will likely cause an initial increase in spending, but we expect to fully transition before our next hardware refresh cycle. Our total cost of ownership (TCO) analysis over the next 5 years for a public cloud strategy achieves a cost reduction between 30% and 50% over our current model.

 

For this question, refer to the Dress4Win case study. Dress4Win is expected to grow to 10 times its size in 1 year with a corresponding growth in data and traffic that mirrors the existing patterns of usage. The CIO has set the target of migrating production infrastructure to the cloud within the next 6 months. How will you configure the solution to scale for this growth without making major application changes and still maximize the ROI?

 

A.       Migrate the web application layer to App Engine, and MySQL to Cloud Datastore, and NAS to Cloud Storage. Deploy RabbitMQ, and deploy Hadoop servers using Deployment Manager.

B.       Migrate RabbitMQ to Cloud Pub/Sub, Hadoop to BigQuery, and NAS to Compute Engine with Persistent Disk storage. Deploy Tomcat, and deploy Nginx using Deployment Manager.

C.      Implement managed instance groups for Tomcat and Nginx. Migrate MySQL to Cloud SQL, RabbitMQ to Cloud Pub/Sub, Hadoop to Cloud Dataproc, and NAS to Compute Engine with Persistent Disk storage.

D.      Implement managed instance groups for Tomcat and Nginx. Migrate MySQL to Cloud SQL, RabbitMQ to Cloud Pub/Sub, Hadoop to Cloud Dataproc, and NAS to Cloud Storage.

 

Correct Answer: D

Section: (none)
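Part of option D is replacing the RabbitMQ servers with Cloud Pub/Sub for messaging, social notifications, and events. Below is a minimal publish/acknowledge sketch with the google-cloud-pubsub client; the project, topic, and subscription names are placeholders and are assumed to already exist.

```python
from google.cloud import pubsub_v1

project_id = "dress4win-prod"        # placeholder project
topic_id = "social-notifications"    # placeholder topic

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path(project_id, topic_id)

# Publish one event; publish() returns a future that resolves to the message ID.
future = publisher.publish(topic_path, b"user 42 liked outfit 1001", event_type="like")
print("published message", future.result())

subscriber = pubsub_v1.SubscriberClient()
subscription_path = subscriber.subscription_path(project_id, "notifications-worker")

def callback(message):
    print("received:", message.data, message.attributes)
    message.ack()

# Pull messages asynchronously for a few seconds, then shut down.
streaming_pull = subscriber.subscribe(subscription_path, callback=callback)
try:
    streaming_pull.result(timeout=5)
except Exception:
    streaming_pull.cancel()
```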

 

QUESTION 89

Case Study: 5 - Dress4win

 

Company Overview

Dress4win is a web-based company that helps their users organize and manage their personal wardrobe using a website and mobile application. The company also cultivates an active social network that connects their users with designers and retailers. They monetize their services through advertising, e-commerce, referrals, and a freemium app model. The application has grown from a few servers in the founder's garage to several hundred servers and appliances in a colocated data center. However, the capacity of their infrastructure is now insufficient for the application's rapid growth. Because of this growth and the company's desire to innovate faster, Dress4Win is committing to a full migration to a public cloud.

 

Solution Concept

For the first phase of their migration to the cloud, Dress4win is moving their development and test environments. They are also building a disaster recovery site, because their current infrastructure is at a single location. They are not sure which components of their architecture they can migrate as is and which components they need to change before migrating them.

 

Existing Technical Environment

The Dress4win application is served out of a single data center location. All servers run Ubuntu LTS v16.04.

Databases:

MySQL. 1 server for user data, inventory, static data:

 

-  MySQL 5.8

-  8 core CPUs

-  128 GB of RAM

-  2x 5 TB HDD (RAID 1)

 

Redis 3 server cluster for metadata, social graph, caching. Each server is:

 

-  Redis 3.2

-  4 core CPUs

-  32GB of RAM

 

Compute:

40 Web Application servers providing micro-services based APIs and static content.

 

-  Tomcat - Java

-  Nginx

-  4 core CPUs

-  32 GB of RAM

 

20 Apache Hadoop/Spark servers:

 

-  Data analysis

-  Real-time trending calculations

-  8 core CPUS

-  128 GB of RAM

-  4x 5 TB HDD (RAID 1)

 

3 RabbitMQ servers for messaging, social notifications, and events:

 

-  8 core CPUs


-  32GB of RAM

 

Miscellaneous servers:

 

-  Jenkins, monitoring, bastion hosts, security scanners

-  8 core CPUs

-  32GB of RAM

 

Storage appliances:

iSCSI for VM hosts
Fiber channel SAN

-  1 PB total storage; 400 TB available

NAS

-  100 TB total storage; 35 TB available

 

Business Requirements

Build a reliable and reproducible environment with scaled parity of production.
Improve security by defining and adhering to a set of security and Identity and Access Management (IAM) best practices for cloud.
Improve business agility and speed of innovation through rapid provisioning of new resources.
Analyze and optimize architecture for performance in the cloud.

Technical Requirements

Easily create non-production environment in the cloud.

 

Implement an automation framework for provisioning resources in cloud.

Implement a continuous deployment process for deploying applications to the on-premises datacenter or cloud.

Support failover of the production environment to cloud during an emergency.
Encrypt data on the wire and at rest.

Support multiple private connections between the production data center and cloud environment.

 

Executive Statement

Our investors are concerned about our ability to scale and contain costs with our current infrastructure. They are also concerned that a competitor could use a public cloud platform to offset their up-front investment and free them to focus on developing better features. Our traffic patterns are highest in the mornings and weekend evenings; during other times, 80% of our capacity is sitting idle. Our capital expenditure is now exceeding our quarterly projections. Migrating to the cloud will likely cause an initial increase in spending, but we expect to fully transition before our next hardware refresh cycle. Our total cost of ownership (TCO) analysis over the next 5 years for a public cloud strategy achieves a cost reduction between 30% and 50% over our current model.

 

For this question, refer to the Dress4Win case study. Considering the given business requirements, how would you automate the deployment of web and transactional data layers?

 

A.       Deploy Nginx and Tomcat using Cloud Deployment Manager to Compute Engine. Deploy a Cloud SQL server to replace MySQL. Deploy Jenkins using Cloud Deployment Manager.

B.       Deploy Nginx and Tomcat using Cloud Launcher. Deploy a MySQL server using Cloud Launcher. Deploy Jenkins to Compute Engine using Cloud Deployment Manager scripts.

C.      Migrate Nginx and Tomcat to App Engine. Deploy a Cloud Datastore server to replace the MySQL server in a high-availability configuration. Deploy Jenkins to Compute Engine using Cloud Launcher.

D.      Migrate Nginx and Tomcat to App Engine. Deploy a MySQL server using Cloud Launcher. Deploy Jenkins to Compute Engine using Cloud Launcher.


Correct Answer: C

Section: (none)

 

QUESTION 90

Case Study: 5 - Dress4win

 

Company Overview

Dress4win is a web-based company that helps their users organize and manage their personal wardrobe using a website and mobile application. The company also cultivates an active social network that connects their users with designers and retailers. They monetize their services through advertising, e-commerce, referrals, and a freemium app model. The application has grown from a few servers in the founder's garage to several hundred servers and appliances in a colocated data center. However, the capacity of their infrastructure is now insufficient for the application's rapid growth. Because of this growth and the company's desire to innovate faster, Dress4Win is committing to a full migration to a public cloud.

 

Solution Concept

For the first phase of their migration to the cloud, Dress4win is moving their development and test environments. They are also building a disaster recovery site, because their current infrastructure is at a single location. They are not sure which components of their architecture they can migrate as is and which components they need to change before migrating them.

 

Existing Technical Environment

The Dress4win application is served out of a single data center location. All servers run Ubuntu LTS v16.04.

Databases:

MySQL. 1 server for user data, inventory, static data:

 

-  MySQL 5.8

-  8 core CPUs

-  128 GB of RAM

-  2x 5 TB HDD (RAID 1)

 

Redis 3 server cluster for metadata, social graph, caching. Each server is:

-  Redis 3.2

-  4 core CPUs

-  32GB of RAM

 

Compute:

40 Web Application servers providing micro-services based APIs and static content.

 

-  Tomcat - Java

-  Nginx

-  4 core CPUs

-  32 GB of RAM

 

20 Apache Hadoop/Spark servers:

 

-  Data analysis

-  Real-time trending calculations

-  8 core CPUS

-  128 GB of RAM

-  4x 5 TB HDD (RAID 1)

 

3 RabbitMQ servers for messaging, social notifications, and events:

 

-  8 core CPUs

-  32GB of RAM

 

Miscellaneous servers:

 

-  Jenkins, monitoring, bastion hosts, security scanners

-  8 core CPUs


-  32GB of RAM

 

Storage appliances:

iSCSI for VM hosts
Fiber channel SAN

-  1 PB total storage; 400 TB available

NAS

-  100 TB total storage; 35 TB available

 

Business Requirements

Build a reliable and reproducible environment with scaled parity of production.
Improve security by defining and adhering to a set of security and Identity and Access Management (IAM) best practices for cloud.
Improve business agility and speed of innovation through rapid provisioning of new resources.
Analyze and optimize architecture for performance in the cloud.

Technical Requirements

Easily create non-production environment in the cloud.

 

Implement an automation framework for provisioning resources in cloud.

Implement a continuous deployment process for deploying applications to the on-premises datacenter or cloud.

Support failover of the production environment to cloud during an emergency.

 

Encrypt data on the wire and at rest.

Support multiple private connections between the production data center and cloud environment.

Executive Statement

Our investors are concerned about our ability to scale and contain costs with our current infrastructure. They are also concerned that a competitor could use a public cloud platform to offset their up-front investment and free them to focus on developing better features. Our traffic patterns are highest in the mornings and weekend evenings; during other times, 80% of our capacity is sitting idle. Our capital expenditure is now exceeding our quarterly projections. Migrating to the cloud will likely cause an initial increase in spending, but we expect to fully transition before our next hardware refresh cycle. Our total cost of ownership (TCO) analysis over the next 5 years for a public cloud strategy achieves a cost reduction between 30% and 50% over our current model.

 

For this question, refer to the Dress4Win case study. Which of the compute services should be migrated as-is and would still be an optimized architecture for performance in the cloud?

 

A.       Web applications deployed using App Engine standard environment

B.       RabbitMQ deployed using an unmanaged instance group

C.      Hadoop/Spark deployed using Cloud Dataproc Regional in High Availability mode

D.      Jenkins, monitoring, bastion hosts, security scanners services deployed on custom machine types

 

Correct Answer: D

Section: (none)

 



QUESTION 91

Case Study: 6 - TerramEarth

 

Company Overview

TerramEarth manufactures heavy equipment for the mining and agricultural industries. About 80% of their business is from mining and 20% from agriculture. They currently have over 500 dealers and service centers in 100 countries. Their mission is to build products that make their customers more productive.

 

Solution Concept

There are 20 million TerramEarth vehicles in operation that collect 120 fields of data per second. Data is stored locally on the vehicle and can be accessed for analysis when a vehicle is serviced. The data is downloaded via a maintenance port. This same port can be used to adjust operational parameters, allowing the vehicles to be upgraded in the field with new computing modules.

 

Approximately 200,000 vehicles are connected to a cellular network, allowing TerramEarth to collect data directly. At a rate of 120 fields of data per second with 22 hours of operation per day, TerramEarth collects a total of about 9 TB/day from these connected vehicles.

 

Existing Technical Environment

TerramEarth's existing architecture is composed of Linux and Windows-based systems that reside in a single U.S. west coast based data center. These systems gzip CSV files from the field and upload via FTP, and place the data in their data warehouse. Because this process takes time, aggregated reports are based on data that is 3 weeks old.

 

With this data, TerramEarth has been able to preemptively stock replacement parts and reduce unplanned downtime of their vehicles by 60%. However, because the data is stale, some customers are without their vehicles for up to 4 weeks while they wait for replacement parts.

 

Business Requirements

Decrease unplanned vehicle downtime to less than 1 week.

Support the dealer network with more data on how their customers use their equipment to better position new products and services.
Have the ability to partner with different companies, especially with seed and fertilizer suppliers in the fast-growing agricultural business, to create compelling joint offerings for their customers.

 

Technical Requirements

Expand beyond a single datacenter to decrease latency to the American Midwest and east coast.

Create a backup strategy.

 

Increase security of data transfer from equipment to the datacenter.
Improve data in the data warehouse.

Use customer and equipment data to anticipate customer needs.

 

Application 1: Data ingest

A custom Python application reads uploaded datafiles from a single server, writes to the data warehouse.

Compute:

Windows Server 2008 R2

 

-  16 CPUs

-  128 GB of RAM

-  10 TB local HDD storage

 

Application 2: Reporting

An off the shelf application that business analysts use to run a daily report to see what equipment needs repair. Only 2 analysts of a team of 10 (5 west coast, 5 east coast) can connect to the reporting application at a time.

Compute:

Off the shelf application. License tied to number of physical CPUs

 

-  Windows Server 2008 R2

-  16 CPUs

-  32 GB of RAM


-  500 GB HDD

Data warehouse:

A single PostgreSQL server

 

-  RedHat Linux

-  64 CPUs

-  128 GB of RAM

-  4x 6TB HDD in RAID 0

 

Executive Statement

Our competitive advantage has always been in the manufacturing process, with our ability to build better vehicles for lower cost than our competitors. However, new products with different approaches are constantly being developed, and I'm concerned that we lack the skills to undergo the next wave of transformations in our industry. My goals are to build our skills while addressing immediate market needs through incremental innovations.

 

For this question, refer to the TerramEarth case study. To be compliant with European GDPR regulation, TerramEarth is required to delete data generated from its European customers after a period of 36 months when it contains personal data. In the new architecture, this data will be stored in both Cloud Storage and BigQuery. What should you do?

 

A.       Create a BigQuery table for the European data, and set the table retention period to 36 months. For Cloud Storage, use gsutil to enable lifecycle management using a DELETE action with an Age condition of 36 months.

B.       Create a BigQuery table for the European data, and set the table retention period to 36 months. For Cloud Storage, use gsutil to create a SetStorageClass to NONE action with an Age condition of 36 months.

C.      Create a BigQuery time-partitioned table for the European data, and set the partition expiration period to 36 months. For Cloud Storage, use gsutil to enable lifecycle management using a DELETE action with an Age condition of 36 months.

D.      Create a BigQuery time-partitioned table for the European data, and set the partition period to 36 months. For Cloud Storage, use gsutil to create a SetStorageClass to NONE action with an Age condition of 36 months.

 

Correct Answer: C

Section: (none)
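Option C combines two retention mechanisms: partition expiration on the BigQuery table and a DELETE lifecycle rule on the Cloud Storage bucket. The sketch below shows both with the google-cloud-bigquery and google-cloud-storage clients; the project, dataset, table, bucket, and schema are placeholders, and 36 months is approximated as 1,095 days.

```python
from google.cloud import bigquery, storage

# --- BigQuery: time-partitioned table whose partitions expire after ~36 months ---
bq = bigquery.Client(project="terramearth-eu")          # placeholder project
table = bigquery.Table(
    "terramearth-eu.telemetry.eu_vehicle_data",          # placeholder table ID
    schema=[
        bigquery.SchemaField("vehicle_id", "STRING"),
        bigquery.SchemaField("collected_at", "TIMESTAMP"),
        bigquery.SchemaField("payload", "STRING"),
    ],
)
thirty_six_months_ms = 1095 * 24 * 60 * 60 * 1000
table.time_partitioning = bigquery.TimePartitioning(
    type_=bigquery.TimePartitioningType.DAY,
    field="collected_at",
    expiration_ms=thirty_six_months_ms,                  # partitions auto-delete
)
bq.create_table(table)

# --- Cloud Storage: lifecycle DELETE rule with an Age condition of ~36 months ---
gcs = storage.Client(project="terramearth-eu")
bucket = gcs.get_bucket("terramearth-eu-data")           # placeholder bucket
bucket.add_lifecycle_delete_rule(age=1095)
bucket.patch()
```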

 

QUESTION 92

Case Study: 6 - TerramEarth

 

Company Overview

TerramEarth manufactures heavy equipment for the mining and agricultural industries. About 80% of their business is from mining and 20% from agriculture. They currently have over 500 dealers and service centers in 100 countries. Their mission is to build products that make their customers more productive.

 

Solution Concept

There are 20 million TerramEarth vehicles in operation that collect 120 fields of data per second. Data is stored locally on the vehicle and can be accessed for analysis when a vehicle is serviced. The data is downloaded via a maintenance port. This same port can be used to adjust operational parameters, allowing the vehicles to be upgraded in the field with new computing modules.

 

Approximately 200,000 vehicles are connected to a cellular network, allowing TerramEarth to collect data directly. At a rate of 120 fields of data per second with 22 hours of operation per day, TerramEarth collects a total of about 9 TB/day from these connected vehicles.

 

Existing Technical Environment

TerramEarth's existing architecture is composed of Linux and Windows-based systems that reside in a single U.S. west coast based data center. These systems gzip CSV files from the field and upload via FTP, and place the data in their data warehouse. Because this process takes time, aggregated reports are based on data that is 3 weeks old.

 

With this data, TerramEarth has been able to preemptively stock replacement parts and reduce unplanned downtime of their vehicles by 60%. However, because the data is stale, some customers are without their vehicles for up to 4 weeks while they wait for replacement parts.


Business Requirements

Decrease unplanned vehicle downtime to less than 1 week.

 

Support the dealer network with more data on how their customers use their equipment to better position new products and services.
Have the ability to partner with different companies, especially with seed and fertilizer suppliers in the fast-growing agricultural business, to create compelling joint offerings for their customers.

 

Technical Requirements

Expand beyond a single datacenter to decrease latency to the American Midwest and east coast.

Create a backup strategy.

Increase security of data transfer from equipment to the datacenter.
Improve data in the data warehouse.

Use customer and equipment data to anticipate customer needs.

 

Application 1: Data ingest

A custom Python application reads uploaded datafiles from a single server, writes to the data warehouse.

Compute:

Windows Server 2008 R2

 

-  16 CPUs

-  128 GB of RAM

-  10 TB local HDD storage

 

Application 2: Reporting

An off the shelf application that business analysts use to run a daily report to see what equipment needs repair. Only 2 analysts of a team of 10 (5 west coast, 5 east coast) can connect to the reporting application at a time.

Compute:

Off the shelf application. License tied to number of physical CPUs

 

-  Windows Server 2008 R2

-  16 CPUs

-  32 GB of RAM

-  500 GB HDD

Data warehouse:

A single PostgreSQL server

 

-  RedHat Linux

-  64 CPUs

-  128 GB of RAM

-  4x 6TB HDD in RAID 0

 

Executive Statement

Our competitive advantage has always been in the manufacturing process, with our ability to build better vehicles for lower cost than our competitors. However, new products with different approaches are constantly being developed, and I'm concerned that we lack the skills to undergo the next wave of transformations in our industry. My goals are to build our skills while addressing immediate market needs through incremental innovations.

 

For this question, refer to the TerramEarth case study. TerramEarth has decided to store data files in Cloud Storage. You need to configure a Cloud Storage lifecycle rule to store 1 year of data and minimize file storage cost.

Which two actions should you take?

 

A.       Create a Cloud Storage lifecycle rule with Age: "30", Storage Class: "Standard", and Action: "Set to Coldline", and create a second GCS life-cycle rule with Age: "365", Storage Class: "Coldline", and Action: "Delete".

B.       Create a Cloud Storage lifecycle rule with Age: "30", Storage Class: "Coldline", and Action: "Set to Nearline", and create a second GCS life-cycle rule with Age: "91", Storage Class: "Coldline", and Action: "Set to Nearline".

C.      Create a Cloud Storage lifecycle rule with Age: "90", Storage Class: "Standard", and Action: "Set to Nearline", and create a second GCS life-cycle rule with Age: "91", Storage Class: "Nearline", and Action: "Set to Coldline".

D.      Create a Cloud Storage lifecycle rule with Age: "30", Storage Class: "Standard", and Action: "Set to Coldline", and create a second GCS life-cycle rule with Age: "365", Storage Class: "Nearline", and Action: "Delete".

 

Correct Answer: A

Section: (none)
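Answer A translates to two lifecycle rules on the bucket: move Standard objects to Coldline after 30 days, then delete objects after 365 days. A minimal sketch with the google-cloud-storage client follows; the bucket name is a placeholder.

```python
from google.cloud import storage

client = storage.Client()
bucket = client.get_bucket("terramearth-datafiles")   # placeholder bucket name

# Rule 1: after 30 days, transition objects still stored as Standard to Coldline.
bucket.add_lifecycle_set_storage_class_rule(
    "COLDLINE", age=30, matches_storage_class=["STANDARD"]
)

# Rule 2: after 365 days, delete objects to cap retained storage at one year.
bucket.add_lifecycle_delete_rule(age=365)

bucket.patch()
print(list(bucket.lifecycle_rules))
```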

 

QUESTION 93

Case Study: 6 - TerramEarth

 

Company Overview

TerramEarth manufactures heavy equipment for the mining and agricultural industries. About 80% of their business is from mining and 20% from agriculture. They currently have over 500 dealers and service centers in 100 countries. Their mission is to build products that make their customers more productive.

 

Solution Concept

There are 20 million TerramEarth vehicles in operation that collect 120 fields of data per second. Data is stored locally on the vehicle and can be accessed for analysis when a vehicle is serviced. The data is downloaded via a maintenance port. This same port can be used to adjust operational parameters, allowing the vehicles to be upgraded in the field with new computing modules. Approximately 200,000 vehicles are connected to a cellular network, allowing TerramEarth to collect data directly. At a rate of 120 fields of data per second with 22 hours of operation per day, TerramEarth collects a total of about 9 TB/day from these connected vehicles.

 

Existing Technical Environment

TerramEarth's existing architecture is composed of Linux and Windows-based systems that reside in a single U.S. west coast based data center. These systems gzip CSV files from the field and upload via FTP, and place the data in their data warehouse. Because this process takes time, aggregated reports are based on data that is 3 weeks old.

 

With this data, TerramEarth has been able to preemptively stock replacement parts and reduce unplanned downtime of their vehicles by 60%. However, because the data is stale, some customers are without their vehicles for up to 4 weeks while they wait for replacement parts.

 

Business Requirements

Decrease unplanned vehicle downtime to less than 1 week.

 

Support the dealer network with more data on how their customers use their equipment to better position new products and services.
Have the ability to partner with different companies, especially with seed and fertilizer suppliers in the fast-growing agricultural business, to create compelling joint offerings for their customers.

 

Technical Requirements

Expand beyond a single datacenter to decrease latency to the American Midwest and east coast.

Create a backup strategy.

 

Increase security of data transfer from equipment to the datacenter.
Improve data in the data warehouse.


Use customer and equipment data to anticipate customer needs.

 

Application 1: Data ingest

A custom Python application reads uploaded datafiles from a single server, writes to the data warehouse.

Compute:

Windows Server 2008 R2

 

-  16 CPUs

-  128 GB of RAM

-  10 TB local HDD storage

 

Application 2: Reporting

An off the shelf application that business analysts use to run a daily report to see what equipment needs repair. Only 2 analysts of a team of 10 (5 west coast, 5 east coast) can connect to the reporting application at a time.

Compute:

Off the shelf application. License tied to number of physical CPUs

-  Windows Server 2008 R2

-  16 CPUs

-  32 GB of RAM

-  500 GB HDD

Data warehouse:

A single PostgreSQL server

 

-  RedHat Linux

-  64 CPUs

-  128 GB of RAM

-  4x 6TB HDD in RAID 0

 

Executive Statement

Our competitive advantage has always been in the manufacturing process, with our ability to build better vehicles for lower cost than our competitors. However, new products with different approaches are constantly being developed, and I'm concerned that we lack the skills to undergo the next wave of transformations in our industry. My goals are to build our skills while addressing immediate market needs through incremental innovations.

 

For this question, refer to the TerramEarth case study. You need to implement a reliable, scalable GCP solution for the data warehouse for your company, TerramEarth. Considering the TerramEarth business and technical requirements, what should you do?

 

A.       Replace the existing data warehouse with BigQuery. Use table partitioning.

B.       Replace the existing data warehouse with a Compute Engine instance with 96 CPUs.

C.      Replace the existing data warehouse with BigQuery. Use federated data sources.

D.      Replace the existing data warehouse with a Compute Engine instance with 96 CPUs. Add an additional Compute Engine pre-emptible instance with 32 CPUs.

 

Correct Answer: A

Section: (none)
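With BigQuery as the data warehouse (answer A), the existing gzip-compressed CSV uploads can be loaded straight from Cloud Storage into a partitioned table with a load job. A rough sketch follows, assuming placeholder project, dataset, table, bucket path, and schema:

```python
from google.cloud import bigquery

client = bigquery.Client(project="terramearth-dw")       # placeholder project

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    skip_leading_rows=1,
    schema=[
        bigquery.SchemaField("vehicle_id", "STRING"),
        bigquery.SchemaField("collected_at", "TIMESTAMP"),
        bigquery.SchemaField("field_values", "STRING"),
    ],
    # Partition on the collection timestamp so reports scan only recent data.
    time_partitioning=bigquery.TimePartitioning(
        type_=bigquery.TimePartitioningType.DAY, field="collected_at"
    ),
    write_disposition=bigquery.WriteDisposition.WRITE_APPEND,
)

# BigQuery can read gzip-compressed CSV directly from Cloud Storage.
load_job = client.load_table_from_uri(
    "gs://terramearth-ingest/2024/*.csv.gz",              # placeholder bucket path
    "terramearth-dw.analytics.vehicle_telemetry",         # placeholder table
    job_config=job_config,
)
load_job.result()   # wait for completion
print("loaded rows:", client.get_table("terramearth-dw.analytics.vehicle_telemetry").num_rows)
```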

 

QUESTION 94

Case Study: 7 - Mountkirk Games

 

Company Overview

Mountkirk Games makes online, session-based, multiplayer games for mobile platforms. They build all of their games using some server-side integration. Historically, they have used cloud providers to lease physical servers.

Due to the unexpected popularity of some of their games, they have had problems scaling their global audience, application servers, MySQL databases, and analytics tools. Their current model is to write game statistics to files and send them through an ETL tool that loads them into a centralized MySQL database for reporting.

 

Solution Concept

Mountkirk Games is building a new game, which they expect to be very popular. They plan to deploy the game's backend on Google Compute Engine so they can capture streaming metrics, run intensive analytics, and take advantage of its autoscaling server environment and integrate with a managed NoSQL database.

 

Business Requirements

Increase to a global footprint.
Improve uptime.
Increase efficiency of the cloud resources we use.
Reduce latency to all customers.

Technical Requirements

Requirements for Game Backend Platform

Dynamically scale up or down based on game activity.

 

Connect to a transactional database service to manage user profiles and game state.
Store game activity in a timeseries database service for future analysis.
As the system scales, ensure that data is not lost due to processing backlogs.
Run hardened Linux distro.

Requirements for Game Analytics Platform

Dynamically scale up or down based on game activity.

 

Process incoming data on the fly directly from the game servers.
Process data that arrives late because of slow mobile networks.
Allow queries to access at least 10 TB of historical data.
Process files that are regularly uploaded by users' mobile devices.

 

Executive Statement

Our last successful game did not scale well with our previous cloud provider, resulting in lower user adoption and affecting the game's reputation. Our investors want more key performance indicators (KPIs) to evaluate the speed and stability of the game, as well as other metrics that provide deeper insight into usage patterns so we can adapt the game to target users. Additionally, our current technology stack cannot provide the scale we need, so we want to replace MySQL and move to an environment that provides autoscaling, low latency load balancing, and frees us up from managing physical servers.

 

For this question, refer to the Mountkirk Games case study. Mountkirk Games wants to migrate from their current analytics and statistics reporting model to one that meets their technical requirements on Google Cloud Platform.

Which two steps should be part of their migration plan? (Choose two.)

 

A.       Evaluate the impact of migrating their current batch ETL code to Cloud Dataflow.

B.       Write a schema migration plan to denormalize data for better performance in BigQuery.

C.      Draw an architecture diagram that shows how to move from a single MySQL database to a MySQL cluster.

D.      Load 10 TB of analytics data from a previous game into a Cloud SQL instance, and run test queries against the full dataset to confirm that they complete successfully.

E.       Integrate Cloud Armor to defend against possible SQL injection attacks in analytics files uploaded to Cloud Storage.

 

Correct Answer: AB

Section: (none)
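Migrating the batch ETL step to Cloud Dataflow (option A) means expressing it as an Apache Beam pipeline that reads the game-statistics files, transforms them, and writes to BigQuery instead of MySQL. Below is a minimal batch sketch with the Beam Python SDK; the bucket, table, project, and record format are placeholders.

```python
import json
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

def parse_stat(line):
    """Turn one JSON line of game statistics into a BigQuery row dict (format assumed)."""
    record = json.loads(line)
    return {
        "player_id": record["player"],
        "score": int(record["score"]),
        "event_time": record["ts"],
    }

options = PipelineOptions(
    runner="DataflowRunner",                 # or "DirectRunner" for local testing
    project="mountkirk-analytics",           # placeholder project
    region="us-central1",
    temp_location="gs://mountkirk-temp/bq",  # placeholder bucket
)

with beam.Pipeline(options=options) as p:
    (
        p
        | "ReadStats" >> beam.io.ReadFromText("gs://mountkirk-stats/*.json")
        | "Parse" >> beam.Map(parse_stat)
        | "WriteToBQ" >> beam.io.WriteToBigQuery(
            "mountkirk-analytics:game.player_stats",
            schema="player_id:STRING,score:INTEGER,event_time:TIMESTAMP",
            write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
            create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED,
        )
    )
```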

 

QUESTION 95

Case Study: 7 - Mountkirk Games

Company Overview


Mountkirk Games makes online, session-based, multiplayer games for mobile platforms. They build all of their games using some server-side integration. Historically, they have used cloud providers to lease physical servers.

Due to the unexpected popularity of some of their games, they have had problems scaling their global audience, application servers, MySQL databases, and analytics tools. Their current model is to write game statistics to files and send them through an ETL tool that loads them into a centralized MySQL database for reporting.

 

Solution Concept

Mountkirk Games is building a new game, which they expect to be very popular. They plan to deploy the game's backend on Google Compute Engine so they can capture streaming metrics, run intensive analytics, and take advantage of its autoscaling server environment and integrate with a managed NoSQL database.

 

Business Requirements

Increase to a global footprint.
Improve uptime.
Increase efficiency of the cloud resources we use.
Reduce latency to all customers.

Technical Requirements

Requirements for Game Backend Platform

Dynamically scale up or down based on game activity.

 

Connect to a transactional database service to manage user profiles and game state.
Store game activity in a timeseries database service for future analysis.
As the system scales, ensure that data is not lost due to processing backlogs.
Run hardened Linux distro.

Requirements for Game Analytics Platform

Dynamically scale up or down based on game activity.

 

Process incoming data on the fly directly from the game servers.
Process data that arrives late because of slow mobile networks.
Allow queries to access at least 10 TB of historical data.
Process files that are regularly uploaded by users' mobile devices.

 

Executive Statement

Our last successful game did not scale well with our previous cloud provider, resulting in lower user adoption and affecting the game's reputation. Our investors want more key performance indicators (KPIs) to evaluate the speed and stability of the game, as well as other metrics that provide deeper insight into usage patterns so we can adapt the game to target users. Additionally, our current technology stack cannot provide the scale we need, so we want to replace MySQL and move to an environment that provides autoscaling, low latency load balancing, and frees us up from managing physical servers.

 

For this question, refer to the Mountkirk Games case study. You need to analyze and define the technical architecture for the compute workloads for your company, Mountkirk Games. Considering the Mountkirk Games business and technical requirements, what should you do?

 

A.       Create network load balancers. Use preemptible Compute Engine instances.

B.       Create network load balancers. Use non-preemptible Compute Engine instances.

C.      Create a global load balancer with managed instance groups and autoscaling policies. Use preemptible Compute Engine instances.

D.      Create a global load balancer with managed instance groups and autoscaling policies. Use non-preemptible Compute Engine instances.


Correct Answer: D

Section: (none)
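Answer D maps to a managed instance group with autoscaling on non-preemptible VMs behind a global HTTP(S) load balancer. A rough gcloud sequence, wrapped from Python, is sketched below; template, group, backend, and health-check names, zones, and thresholds are placeholders, and the load-balancer frontend (URL map, proxy, forwarding rule) is omitted for brevity.

```python
import subprocess

def gcloud(*args):
    subprocess.run(["gcloud", *args], check=True)

# Instance template for the game backend (non-preemptible by default).
gcloud("compute", "instance-templates", "create", "game-backend-tmpl",
       "--machine-type=e2-standard-4", "--image-family=debian-12",
       "--image-project=debian-cloud")

# Managed instance group that the autoscaler and load balancer will target.
gcloud("compute", "instance-groups", "managed", "create", "game-backend-mig",
       "--template=game-backend-tmpl", "--size=3", "--zone=us-central1-a")

# Scale between 3 and 50 instances based on CPU load from game activity.
gcloud("compute", "instance-groups", "managed", "set-autoscaling", "game-backend-mig",
       "--zone=us-central1-a", "--min-num-replicas=3", "--max-num-replicas=50",
       "--target-cpu-utilization=0.6")

# Global backend service that the HTTP(S) load balancer frontend would point at
# ("game-backend-hc" is a placeholder for a pre-created health check).
gcloud("compute", "backend-services", "create", "game-backend-svc",
       "--global", "--protocol=HTTP", "--port-name=http",
       "--health-checks=game-backend-hc")
gcloud("compute", "backend-services", "add-backend", "game-backend-svc",
       "--global", "--instance-group=game-backend-mig",
       "--instance-group-zone=us-central1-a")
```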

 



QUESTION 96

Case Study: 7 - Mountkirk Games

Company Overview

Mountkirk Games makes online, session-based, multiplayer games for mobile platforms. They build all of their games using some server-side integration. Historically, they have used cloud providers to lease physical servers.

Due to the unexpected popularity of some of their games, they have had problems scaling their global audience, application servers, MySQL databases, and analytics tools. Their current model is to write game statistics to files and send them through an ETL tool that loads them into a centralized MySQL database for reporting.

 

Solution Concept

Mountkirk Games is building a new game, which they expect to be very popular. They plan to deploy the game's backend on Google Compute Engine so they can capture streaming metrics, run intensive analytics, and take advantage of its autoscaling server environment and integrate with a managed NoSQL database.

 

Business Requirements

Increase to a global footprint.
Improve uptime.
Increase efficiency of the cloud resources we use.
Reduce latency to all customers.

Technical Requirements

Requirements for Game Backend Platform

Dynamically scale up or down based on game activity.

 

Connect to a transactional database service to manage user profiles and game state.
Store game activity in a timeseries database service for future analysis.
As the system scales, ensure that data is not lost due to processing backlogs.
Run hardened Linux distro.

Requirements for Game Analytics Platform

Dynamically scale up or down based on game activity.

 

Process incoming data on the fly directly from the game servers.
Process data that arrives late because of slow mobile networks.
Allow queries to access at least 10 TB of historical data.
Process files that are regularly uploaded by users' mobile devices.

 

Executive Statement

Our last successful game did not scale well with our previous cloud provider, resulting in lower user adoption and affecting the game's reputation. Our investors want more key performance indicators (KPIs) to evaluate the speed and stability of the game, as well as other metrics that provide deeper insight into usage patterns so we can adapt the game to target users. Additionally, our current technology stack cannot provide the scale we need, so we want to replace MySQL and move to an environment that provides autoscaling, low latency load balancing, and frees us up from managing physical servers.

 

For this question, refer to the Mountkirk Games case study. Mountkirk Games wants to design their solution for the future in order to take advantage of cloud and technology improvements as they become available. Which two steps should they take? (Choose two.)


A.       Store as much analytics and game activity data as financially feasible today so it can be used to train machine learning models to predict user behavior in the future.

B.       Begin packaging their game backend artifacts in container images and running them on Kubernetes Engine to improve the ability to scale up or down based on game activity.

C.      Set up a CI/CD pipeline using Jenkins and Spinnaker to automate canary deployments and improve development velocity.

D.      Adopt a schema versioning tool to reduce downtime when adding new game features that require storing additional player data in the database.

E.       Implement a weekly rolling maintenance process for the Linux virtual machines so they can apply critical kernel patches and package updates and reduce the risk of 0-day vulnerabilities.

 

Correct Answer: CE

Section: (none)

 

QUESTION 97

Case Study: 7 - Mountkirk Games

 

Company Overview

Mountkirk Games makes online, session-based, multiplayer games for mobile platforms. They build all of their games using some server-side integration. Historically, they have used cloud providers to lease physical servers.

Due to the unexpected popularity of some of their games, they have had problems scaling their global audience, application servers, MySQL databases, and analytics tools. Their current model is to write game statistics to files and send them through an ETL tool that loads them into a centralized MySQL database for reporting.

 

Solution Concept

Mountkirk Games is building a new game, which they expect to be very popular. They plan to deploy the game's backend on Google Compute Engine so they can capture streaming metrics, run intensive analytics, and take advantage of its autoscaling server environment and integrate with a managed NoSQL database.

 

Business Requirements

Increase to a global footprint.
Improve uptime.
Increase efficiency of the cloud resources we use.
Reduce latency to all customers.

Technical Requirements

Requirements for Game Backend Platform

Dynamically scale up or down based on game activity.

 

Connect to a transactional database service to manage user profiles and game state.
Store game activity in a timeseries database service for future analysis.
As the system scales, ensure that data is not lost due to processing backlogs.
Run hardened Linux distro.

Requirements for Game Analytics Platform

Dynamically scale up or down based on game activity.

 

Process incoming data on the fly directly from the game servers.
Process data that arrives late because of slow mobile networks.

 

Allow queries to access at least 10 TB of historical data

 

Process files that are regularly uploaded by users' mobile devices


Executive Statement

Our last successful game did not scale well with our previous cloud provider, resulting in lower user adoption and affecting the game's reputation. Our investors want more key performance indicators (KPIs) to evaluate the speed and stability of the game, as well as other metrics that provide deeper insight into usage patterns so we can adapt the game to target users. Additionally, our current technology stack cannot provide the scale we need, so we want to replace MySQL and move to an environment that provides autoscaling, low latency load balancing, and frees us up from managing physical servers.

 

For this question, refer to the Mountkirk Games case study. Mountkirk Games wants you to design a way to test the analytics platform's resilience to changes in mobile network latency.

What should you do?

 

A.       Deploy failure injection software to the game analytics platform that can inject additional latency to mobile client analytics traffic.

B.       Build a test client that can be run from a mobile phone emulator on a Compute Engine virtual machine, and run multiple copies in Google Cloud Platform regions all over the world to generate realistic traffic.

C.      Add the ability to introduce a random amount of delay before beginning to process analytics files uploaded from mobile devices.

D.      Create an opt-in beta of the game that runs on players' mobile devices and collects response times from analytics endpoints running in Google Cloud Platform regions all over the world.

 

Correct Answer: A

Section: (none)

 

QUESTION 98

Case Study: 7 - Mountkirk Games

 

Company Overview

Mountkirk Games makes online, session-based, multiplayer games for mobile platforms. They build all of their games using some server-side integration. Historically, they have used cloud providers to lease physical servers.

Due to the unexpected popularity of some of their games, they have had problems scaling their global audience, application servers, MySQL databases, and analytics tools. Their current model is to write game statistics to files and send them through an ETL tool that loads them into a centralized MySQL database for reporting.

 

Solution Concept

Mountkirk Games is building a new game, which they expect to be very popular. They plan to deploy the game's backend on Google Compute Engine so they can capture streaming metrics, run intensive analytics, and take advantage of its autoscaling server environment and integrate with a managed NoSQL database.

 

Business Requirements

Increase to a global footprint.
Improve uptime.
Increase efficiency of the cloud resources we use.
Reduce latency to all customers.

Technical Requirements

Requirements for Game Backend Platform

Dynamically scale up or down based on game activity.

 

Connect to a transactional database service to manage user profiles and game state.
Store game activity in a timeseries database service for future analysis.
As the system scales, ensure that data is not lost due to processing backlogs.
Run hardened Linux distro.

Requirements for Game Analytics Platform


Dynamically scale up or down based on game activity

 

Process incoming data on the fly directly from the game servers.
Process data that arrives late because of slow mobile networks.
Allow queries to access at least 10 TB of historical data.
Process files that are regularly uploaded by users' mobile devices.

 

Executive Statement

Our last successful game did not scale well with our previous cloud provider, resulting in lower user adoption and affecting the game's reputation. Our investors want more key performance indicators (KPIs) to evaluate the speed and stability of the game, as well as other metrics that provide deeper insight into usage patterns so we can adapt the game to target users. Additionally, our current technology stack cannot provide the scale we need, so we want to replace MySQL and move to an environment that provides autoscaling, low latency load balancing, and frees us up from managing physical servers.

 

For this question, refer to the Mountkirk Games case study. You need to analyze and define the technical architecture for the database workloads for your company, Mountkirk Games. Considering the business and technical requirements, what should you do?

 

A.       Use Cloud SQL for time series data, and use Cloud Bigtable for historical data queries.

B.       Use Cloud SQL to replace MySQL, and use Cloud Spanner for historical data queries.

C.      Use Cloud Bigtable to replace MySQL, and use BigQuery for historical data queries.

D.      Use Cloud Bigtable for time series data, use Cloud Spanner for transactional data, and use BigQuery for historical data queries.

 

Correct Answer: D

Section: (none)
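For reference, a minimal provisioning sketch of this three-way split, assuming hypothetical instance and dataset names and current gcloud/bq flag syntax (exact flags vary between SDK versions):

# Cloud Bigtable for game activity time series (hypothetical names)
gcloud bigtable instances create game-activity --display-name="Game activity" \
  --cluster-config=id=game-activity-c1,zone=us-central1-b,nodes=3

# Cloud Spanner for transactional user profiles and game state
gcloud spanner instances create game-state --config=regional-us-central1 \
  --description="Game state" --nodes=1

# BigQuery dataset for historical analytics queries
bq mk --dataset game_analytics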

 

QUESTION 99

Case Study: 7 - Mountkirk Games

 

Company Overview

Mountkirk Games makes online, session-based, multiplayer games for mobile platforms. They build all of their games using some server-side integration. Historically, they have used cloud providers to lease physical servers.

Due to the unexpected popularity of some of their games, they have had problems scaling their global audience, application servers, MySQL databases, and analytics tools. Their current model is to write game statistics to files and send them through an ETL tool that loads them into a centralized MySQL database for reporting.

 

Solution Concept

Mountkirk Games is building a new game, which they expect to be very popular. They plan to deploy the game's backend on Google Compute Engine so they can capture streaming metrics, run intensive analytics, and take advantage of its autoscaling server environment and integrate with a managed NoSQL database.

Business Requirements

Increase to a global footprint.

 

Improve uptime

 

Increase efficiency of the cloud resources we use.

Reduce latency to all customers.

Technical Requirements

Requirements for Game Backend Platform

Dynamically scale up or down based on game activity.

 

Connect to a transactional database service to manage user profiles and game state.

Store game activity in a time series database service for future analysis.

As the system scales, ensure that data is not lost due to processing backlogs.

Run hardened Linux distro.

Requirements for Game Analytics Platform

Dynamically scale up or down based on game activity

 

Process incoming data on the fly directly from the game servers

Process data that arrives late because of slow mobile networks

Allow queries to access at least 10 TB of historical data

Process files that are regularly uploaded by users' mobile devices

 

Executive Statement

Our last successful game did not scale well with our previous cloud provider, resulting in lower user adoption and affecting the game's reputation. Our investors want more key performance indicators (KPIs) to evaluate the speed and stability of the game, as well as other metrics that provide deeper insight into usage patterns so we can adapt the game to target users. Additionally, our current technology stack cannot provide the scale we need, so we want to replace MySQL and move to an environment that provides autoscaling, low latency load balancing, and frees us up from managing physical servers.

 

For this question, refer to the Mountkirk Games case study. Which managed storage option meets Mountkirk's technical requirement for storing game activity in a time series database service?

 

A.       Cloud Bigtable

B.       Cloud Spanner

C.      BigQuery

D.      Cloud Datastore

 

Correct Answer: A

Section: (none)
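As a quick illustration, a Bigtable table for this time series data could be created with the cbt tool; the project, instance, table, and column family names below are placeholders, not part of the case study:

# Create a table for per-session game events and list its column families
cbt -project my-project -instance game-activity createtable game_events families=metrics
cbt -project my-project -instance game-activity ls game_events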

 

QUESTION 100

Your customer support tool logs all email and chat conversations to Cloud Bigtable for retention and analysis. What is the recommended approach for sanitizing this data of personally identifiable information or payment card information before initial storage?

 

A.       Hash all data using SHA256

B.       Encrypt all data using elliptic curve cryptography

C.      De-identify the data with the Cloud Data Loss Prevention API

D.      Use regular expressions to find and redact phone numbers, email addresses, and credit card numbers

 

Correct Answer: C

Section: (none)
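A minimal sketch of a DLP de-identification request via the REST API, assuming a hypothetical project ID and sample text; in practice the ingestion pipeline would make this call before writing to Bigtable:

curl -s -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  "https://dlp.googleapis.com/v2/projects/my-project/content:deidentify" \
  -d '{
    "item": {"value": "Call me at 555-0100, card 4111-1111-1111-1111"},
    "inspectConfig": {"infoTypes": [{"name": "PHONE_NUMBER"}, {"name": "CREDIT_CARD_NUMBER"}]},
    "deidentifyConfig": {"infoTypeTransformations": {"transformations": [
      {"primitiveTransformation": {"replaceWithInfoTypeConfig": {}}}
    ]}}
  }'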

 



QUESTION 101

You are using Cloud Shell and need to install a custom utility for use in a few weeks. Where can you store the file so it is in the default execution path and persists across sessions?

 

A.       ~/bin

B.       Cloud Storage

C.      /google/scripts

D.      /usr/local/bin

 

Correct Answer: A
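A minimal sketch of why ~/bin works, with a hypothetical utility name: the Cloud Shell home directory persists across sessions and ~/bin is already on the default PATH.

mkdir -p ~/bin
cp ./my-utility ~/bin/
chmod +x ~/bin/my-utility
my-utility --help   # resolvable from the default PATH in later sessions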

Section: (none)

 

QUESTION 102


You want to create a private connection between your instances on Compute Engine and your on-premises data center. You require a connection of at least 20 Gbps. You want to follow Google-recommended practices. How should you set up the connection?

 

A.       Create a VPC and connect it to your on-premises data center using Dedicated Interconnect.

B.       Create a VPC and connect it to your on-premises data center using a single Cloud VPN.

C.      Create a Cloud Content Delivery Network (Cloud CDN) and connect it to your on-premises data center using Dedicated Interconnect.

D.      Create a Cloud Content Delivery Network (Cloud CDN) and connect it to your on-premises data center using a single Cloud VPN.

 

Correct Answer: A

Section: (none)
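Once the physical Dedicated Interconnect is provisioned, it is linked to the VPC through a Cloud Router and a VLAN attachment. A rough sketch with hypothetical resource names; exact flags depend on the gcloud version:

gcloud compute routers create onprem-router --network=my-vpc --region=us-central1 --asn=65001
gcloud compute interconnects attachments dedicated create onprem-attachment \
  --interconnect=my-interconnect --router=onprem-router --region=us-central1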

 

QUESTION 103

You are analyzing and defining business processes to support your startup's trial usage of GCP, and you don't yet know what consumer demand for your product will be. Your manager requires you to minimize GCP service costs and adhere to Google best practices. What should you do?

 

A.       Utilize free tier and sustained use discounts. Provision a staff position for service cost management.

B.       Utilize free tier and sustained use discounts. Provide training to the team about service cost management.

C.      Utilize free tier and committed use discounts. Provision a staff position for service cost management.

D.      Utilize free tier and committed use discounts. Provide training to the team about service cost management.

 

Correct Answer: B

Section: (none)

 

QUESTION 104

You are building a continuous deployment pipeline for a project stored in a Git source repository and want to ensure that code changes can be verified before deploying to production. What should you do?

 

A.       Use Spinnaker to deploy builds to production using the red/black deployment strategy so that changes can easily be rolled back.

B.       Use Spinnaker to deploy builds to production and run tests on production deployments.

C.      Use Jenkins to build the staging branches and the master branch. Build and deploy changes to production for 10% of users before doing a complete rollout.

D.      Use Jenkins to monitor tags in the repository. Deploy staging tags to a staging environment for testing. After testing, tag the repository for production and deploy that to the production environment.

 

Correct Answer: D

Section: (none)
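A minimal sketch of the tag flow a Jenkins job could watch, with hypothetical tag names:

# Tag a commit for staging; a Jenkins job watching staging-* tags deploys it to the staging environment
git tag -a staging-v1.4.0 -m "Candidate for staging"
git push origin staging-v1.4.0

# After tests pass, tag the same commit for production and let Jenkins deploy it there
git tag -a prod-v1.4.0 -m "Approved for production"
git push origin prod-v1.4.0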

 

QUESTION 105

You have an outage in your Compute Engine managed instance group: all instances keep restarting after 5 seconds. You have a health check configured, but autoscaling is disabled. Your colleague, who is a Linux expert, offered to look into the issue. You need to make sure that he can access the VMs. What should you do?

 

A.       Grant your colleague the IAM role of project Viewer

B.       Perform a rolling restart on the instance group

C.      Disable the health check for the instance group. Add his SSH key to the project-wide SSH keys

D.      Disable autoscaling for the instance group. Add his SSH key to the project-wide SSH Keys

 

Correct Answer: C

Section: (none)
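After removing the autohealing health check from the managed instance group, the colleague's key can be added to the project-wide SSH keys. A sketch with a hypothetical username and key file; note that the file should contain the full desired set of keys, since it replaces the existing metadata value:

# keys.txt contains one entry per line, e.g.:  alex:ssh-rsa AAAAB3... alex@example.com
gcloud compute project-info add-metadata --metadata-from-file ssh-keys=keys.txt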

 



QUESTION 106

Your company is migrating its on-premises data center into the cloud. As part of the migration, you want to integrate Kubernetes Engine for workload orchestration. Parts of your architecture must also be PCI DSS-compliant. Which of the following is most accurate?

 

A.       App Engine is the only compute platform on GCP that is certified for PCI DSS hosting.

B.       Kubernetes Engine cannot be used under PCI DSS because it is considered shared hosting.

C.      Kubernetes Engine and GCP provide the tools you need to build a PCI DSS-compliant environment.

D.      All Google Cloud services are usable because Google Cloud Platform is certified PCI-compliant.

 

Correct Answer: C

Section: (none)

 

QUESTION 107

Your company has multiple on-premises systems that serve as sources for reporting. The data has not been maintained well and has become degraded over time. You want to use Google-recommended practices to detect anomalies in your company data. What should you do?

 

A.       Upload your files into Cloud Storage. Use Cloud Datalab to explore and clean your data.

B.       Upload your files into Cloud Storage. Use Cloud Dataprep to explore and clean your data.

C.      Connect Cloud Datalab to your on-premises systems. Use Cloud Datalab to explore and clean your data.

D.      Connect Cloud Dataprep to your on-premises systems. Use Cloud Dataprep to explore and clean your data.

 

Correct Answer: B

Section: (none)

 

QUESTION 108

Google Cloud Platform resources are managed hierarchically using organizations, folders, and projects. When Cloud Identity and Access Management (IAM) policies exist at these different levels, what is the effective policy at a particular node of the hierarchy?

 

A.       The effective policy is determined only by the policy set at the node

B.       The effective policy is the policy set at the node and restricted by the policies of its ancestors

C.      The effective policy is the union of the policy set at the node and policies inherited from its ancestors

D.      The effective policy is the intersection of the policy set at the node and policies inherited from its ancestors

 

Correct Answer: C

Section: (none)

 

QUESTION 109

You are migrating your on-premises solution to Google Cloud in several phases. You will use Cloud VPN to maintain a connection between your on-premises systems and Google Cloud until the migration is completed. You want to make sure all your on-premises systems remain reachable during this period. How should you organize your networking in Google Cloud?

 

A.       Use the same IP range on Google Cloud as you use on-premises

B.       Use the same IP range on Google Cloud as you use on-premises for your primary IP range and use a secondary range that does not overlap with the range you use on-premises

C.      Use an IP range on Google Cloud that does not overlap with the range you use on-premises

D.      Use an IP range on Google Cloud that does not overlap with the range you use on-premises for your primary IP range and use a secondary range with the same IP range as you use on-premises

 

Correct Answer: D

Section: (none)

 

QUESTION 110

You have found an error in your App Engine application caused by missing Cloud Datastore indexes. You have created a YAML file with the required indexes and want to deploy these new indexes to Cloud Datastore. What should you do?


A.       Point gcloud datastore create-indexes to your configuration file

B.       Upload the configuration file to App Engine's default Cloud Storage bucket, and have App Engine detect the new indexes

C.      In the GCP Console, use Datastore Admin to delete the current indexes and upload the new configuration file

D.      Create an HTTP request to the built-in python module to send the index configuration file to your application

 

Correct Answer: A

Section: (none)
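A minimal example, assuming the index definitions are in a local index.yaml; newer SDK versions expose the same operation as gcloud datastore indexes create:

gcloud datastore indexes create index.yaml
# on older SDK versions the equivalent command was:
# gcloud datastore create-indexes index.yaml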

 



QUESTION 111

You have an application that will run on Compute Engine. You need to design an architecture that takes into account a disaster recovery plan that requires your application to fail over to another region in case of a regional outage. What should you do?

 

A.       Deploy the application on two Compute Engine instances in the same project but in a different region. Use the first instance to serve traffic, and use the HTTP load balancing service to fail over to the standby instance in case of a disaster.

B.       Deploy the application on a Compute Engine instance. Use the instance to serve traffic, and use the HTTP load balancing service to fail over to an instance on your premises in case of a disaster.

C.      Deploy the application on two Compute Engine instance groups, each in the same project but in a different region. Use the first instance group to serve traffic, and use the HTTP load balancing service to fail over to the standby instance group in case of a disaster.

D.      Deploy the application on two Compute Engine instance groups, each in a separate project and a different region. Use the first instance group to serve traffic, and use the HTTP load balancing service to fail over to the standby instance in case of a disaster.

 

Correct Answer: C

Section: (none)
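A partial sketch of wiring two regional instance groups behind one global HTTP load balancer; only the health check, backend service, and backends are shown (the URL map, proxy, and forwarding rule are omitted), and all names are hypothetical:

gcloud compute health-checks create http app-hc --port=80
gcloud compute backend-services create app-bs --protocol=HTTP --health-checks=app-hc --global
gcloud compute backend-services add-backend app-bs --global \
  --instance-group=app-mig-us --instance-group-region=us-central1
gcloud compute backend-services add-backend app-bs --global \
  --instance-group=app-mig-eu --instance-group-region=europe-west1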

 

QUESTION 112

You are deploying an application on App Engine that needs to integrate with an on-premises database. For security purposes, your on-premises database must not be accessible through the public Internet.

What should you do?

 

A.       Deploy your application on App Engine standard environment and use App Engine firewall rules to limit access to the open on-premises database.

B.       Deploy your application on App Engine standard environment and use Cloud VPN to limit access to the on-premises database.

C.      Deploy your application on App Engine flexible environment and use App Engine firewall rules to limit access to the on-premises database.

D.      Deploy your application on App Engine flexible environment and use Cloud VPN to limit access to the on-premises database.

 

Correct Answer: D

Section: (none)

 

QUESTION 113

You are working in a highly secured environment where public Internet access from the Compute Engine VMs is not allowed. You do not yet have a VPN connection to access an on-premises file server. You need to install specific software on a Compute Engine instance. How should you install the software?

 

A.       Upload the required installation files to Cloud Storage. Configure the VM on a subnet with a Private Google Access subnet. Assign only an internal IP address to the VM. Download the installation files to the VM using gsutil.

B.       Upload the required installation files to Cloud Storage and use firewall rules to block all traffic except the IP address range for Cloud Storage. Download the files to the VM using gsutil.

C.      Upload the required installation files to Cloud Source Repositories. Configure the VM on a subnet with a Private Google Access subnet. Assign only an internal IP address to the VM. Download the installation files to the VM using gcloud.

D.      Upload the required installation files to Cloud Source Repositories and use firewall rules to block all traffic except the IP address range for Cloud Source Repositories. Download the files to the VM using gsutil.

 

Correct Answer: A

Section: (none)
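A minimal sketch of option A with hypothetical names: enable Private Google Access on the subnet, create the VM with an internal IP only, then pull the files from Cloud Storage:

gcloud compute networks subnets update my-subnet --region=us-central1 \
  --enable-private-ip-google-access
gcloud compute instances create installer-vm --zone=us-central1-a \
  --subnet=my-subnet --no-address
# From the VM, Cloud Storage is reachable over the private route:
gsutil cp gs://my-install-bucket/installer.run .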

 

QUESTION 114

Your company is moving 75 TB of data into Google Cloud. You want to use Cloud Storage and follow Google-recommended practices. What should you do?

 

A.       Move your data onto a Transfer Appliance. Use a Transfer Appliance Rehydrator to decrypt the data into Cloud Storage.

B.       Move your data onto a Transfer Appliance. Use Cloud Dataprep to decrypt the data into Cloud Storage.

C.      Install gsutil on each server that contains data. Use resumable transfers to upload the data into Cloud Storage.

D.      Install gsutil on each server containing data. Use streaming transfers to upload the data into Cloud Storage.

 

Correct Answer: A

Section: (none)

 

QUESTION 115

You have an application deployed on Kubernetes Engine using a Deployment named echo-deployment. The deployment is exposed using a Service called echo-service. You need to perform an update to the application with minimal downtime to the application. What should you do?

 

A.       Use kubectl set image deployment/echo-deployment <new-image>

B.       Use the rolling update functionality of the Instance Group behind the Kubernetes cluster

C.      Update the deployment yaml file with the new container image. Use kubectl delete deployment/echo-deployment and kubectl create -f <yaml-file>

D.      Update the service yaml file with the new container image. Use kubectl delete service/echo-service and kubectl create -f <yaml-file>

 

Correct Answer: A

Section: (none)
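Note that the full command also names the container defined in the Deployment's Pod template; a sketch with hypothetical container and image names:

kubectl set image deployment/echo-deployment echo=gcr.io/my-project/echo:v2
kubectl rollout status deployment/echo-deployment
kubectl rollout undo deployment/echo-deployment   # roll back if the new image misbehaves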

 



QUESTION 116

Your company is using BigQuery as its enterprise data warehouse. Data is distributed over several Google Cloud projects. All queries on BigQuery need to be billed on a single project. You want to make sure that no query costs are incurred on the projects that contain the data. Users should be able to query the datasets, but not edit them.

How should you configure users' access roles?

 

A.       Add all users to a group. Grant the group the role of BigQuery user on the billing project and BigQuery dataViewer on the projects that contain the data.

B.       Add all users to a group. Grant the group the roles of BigQuery dataViewer on the billing project and BigQuery user on the projects that contain the data.

C.      Add all users to a group. Grant the group the roles of BigQuery jobUser on the billing project and BigQuery dataViewer on the projects that contain the data.

D.      Add all users to a group. Grant the group the roles of BigQuery dataViewer on the billing project and BigQuery jobUser on the projects that contain the data.

 

Correct Answer: C

Section: (none)
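A sketch of the corresponding IAM bindings, with a hypothetical group and project IDs:

# Billing project: the group may run (and be billed for) query jobs
gcloud projects add-iam-policy-binding billing-project \
  --member=group:analysts@example.com --role=roles/bigquery.jobUser

# Each data project: the group may read datasets but not run jobs or modify data
gcloud projects add-iam-policy-binding data-project-1 \
  --member=group:analysts@example.com --role=roles/bigquery.dataViewer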

 

QUESTION 117

You have developed an application using Cloud ML Engine that recognizes famous paintings from uploaded images. You want to test the application and allow specific people to upload images for the next 24 hours. Not all users have a Google Account. How should you have users upload images?

 

A.       Have users upload the images to Cloud Storage. Protect the bucket with a password that expires after 24 hours.

B.       Have users upload the images to Cloud Storage using a signed URL that expires after 24 hours.

C.      Create an App Engine web application where users can upload images. Configure App Engine to disable the application after 24 hours. Authenticate users via Cloud Identity.

D.      Create an App Engine web application where users can upload images for the next 24 hours. Authenticate users via Cloud Identity.

 

Correct Answer: B

Section: (none)
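A sketch of generating an upload (PUT) signed URL with gsutil, assuming a hypothetical service-account key file, bucket, and object name:

gsutil signurl -m PUT -d 24h -c image/jpeg sa-key.json gs://painting-uploads/user123.jpg
# The recipient can then upload without a Google Account, e.g.:
# curl -X PUT -H "Content-Type: image/jpeg" --upload-file painting.jpg "<signed URL>"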

 

QUESTION 118

Your web application must comply with the requirements of the European Union's General Data Protection Regulation (GDPR). You are responsible for the technical architecture of your web application. What should you do?

 

A.       Ensure that your web application only uses native features and services of Google Cloud Platform, because Google already has various certifications and provides "pass-on" compliance when you use native features.

B.       Enable the relevant GDPR compliance setting within the GCP Console for each of the services in use within your application.

C.      Ensure that Cloud Security Scanner is part of your test planning strategy in order to pick up any compliance gaps.

D.      Define a design for the security of data in your web application that meets GDPR requirements.

 

Correct Answer: D

Section: (none)

 

QUESTION 119

You need to set up Microsoft SQL Server on GCP. Management requires that there's no downtime in case of a datacenter outage in any of the zones within a GCP region. What should you do?

 

A.       Configure a Cloud SQL instance with high availability enabled.

B.       Configure a Cloud Spanner instance with a regional instance configuration.

C.      Set up SQL Server on Compute Engine, using Always On Availability Groups using Windows Failover Clustering. Place nodes in different subnets.

D.      Set up SQL Server Always On Availability Groups using Windows Failover Clustering. Place nodes in different zones.

 

Correct Answer: A

Section: (none)
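A sketch of option A with hypothetical sizing; --availability-type=REGIONAL provisions a standby in a second zone with automatic failover (flags and available SQL Server editions vary by gcloud version):

gcloud sql instances create sqlserver-prod \
  --database-version=SQLSERVER_2017_STANDARD \
  --availability-type=REGIONAL \
  --region=us-central1 \
  --cpu=4 --memory=26GB \
  --root-password=CHANGE_ME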

 

QUESTION 120

The development team has provided you with a Kubernetes Deployment file. You have no infrastructure yet and need to deploy the application. What should you do?

 

A.       Use gcloud to create a Kubernetes cluster. Use Deployment Manager to create the deployment.

B.       Use gcloud to create a Kubernetes cluster. Use kubectl to create the deployment.

C.      Use kubectl to create a Kubernetes cluster. Use Deployment Manager to create the deployment.

D.      Use kubectl to create a Kubernetes cluster. Use kubectl to create the deployment.

 

Correct Answer: B

Section: (none)
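A minimal sketch with hypothetical names: create the cluster with gcloud, fetch credentials, then apply the Deployment file with kubectl:

gcloud container clusters create app-cluster --zone=us-central1-a --num-nodes=3
gcloud container clusters get-credentials app-cluster --zone=us-central1-a
kubectl apply -f deployment.yaml
kubectl get deployments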

