Google Data Engineer Certification Exam Question Bank Compilation 20241025
Google Cloud Platform (GCP) complete series of past exam questions, the latest 2024 question bank, continuously updated and the most complete collection available. GCP certifications carry high value and are essential for self-study and for moving into the cloud industry. Recent versions are updated regularly so you can keep track of the latest trends.
QUESTION 1
Your company built a TensorFlow neural-network model with a large number of neurons and layers. The model fits the training data well. However, when tested against new data, it performs poorly. What method can you employ to address this?
A. Threading
B. Serialization
C. Dropout Methods
D. Dimensionality Reduction
Correct Answer: C
Section: (none)
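For illustration, a minimal Keras sketch of how dropout layers might be added; the layer sizes, dropout rate, and input shape below are placeholders, not part of the question.
import tensorflow as tf

# Dropout randomly zeroes a fraction of activations during training, which
# combats overfitting when a large network memorizes the training set.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(512, activation="relu", input_shape=(100,)),
    tf.keras.layers.Dropout(0.5),  # drop 50% of units on each training step
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")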
QUESTION 2
You are building a model to make clothing recommendations. You know a user's fashion preference is likely to change over time, so you build a data pipeline to stream new data back to the model as it becomes available.
How should you use this data to train the model?
A. Continuously retrain the model on just the new data.
B. Continuously retrain the model on a combination of existing data and the new data.
C. Train on the existing data while using the new data as your test set.
D. Train on the new data while using the existing data as your test set.
Correct Answer: B
Section: (none)
QUESTION 3
You designed a database for patient records as a pilot project to cover a few hundred patients in three clinics. Your design used a single database table to represent all patients and their visits, and you used self-joins to generate reports. The server resource utilization was at 50%. Since then, the scope of the project has expanded. The database must now store 100 times more patient records. You can no longer run the reports, because they either take too long or they encounter errors with insufficient compute resources.
How should you adjust the database design?
A. Add capacity (memory and disk space) to the database server by the order of 200.
B. Shard the tables into smaller ones based on date ranges, and only generate reports with prespecified date ranges.
C. Normalize the master patient-record table into the patient table and the visits table, and create other necessary tables to avoid self-joins.
D. Partition the table into smaller tables, with one for each clinic. Run queries against the smaller table pairs, and use unions for consolidated reports.
Correct Answer: C
Section: (none)
QUESTION 4
You create an important report for your large team in Google Data Studio 360. The report uses Google BigQuery as its data source. You notice that visualizations are not showing data that is less than 1 hour old. What should you do?
A. Disable caching by editing the report settings.
B. Disable caching in BigQuery by editing table details.
C. Refresh your browser tab showing the visualizations.
D. Clear your browser history for the past hour then reload the tab showing the visualizations.
Correct Answer: A
Section: (none)
QUESTION 5
An external customer provides you with a daily dump of data from their database. The data flows into Google Cloud Storage (GCS) as comma-separated values (CSV) files. You want to analyze this data in Google BigQuery, but the data could have rows that are formatted incorrectly or corrupted. How should you build this pipeline?
A. Use federated data sources, and check data in the SQL query.
B. Enable BigQuery monitoring in Google Stackdriver and create an alert.
C. Import the data into BigQuery using the gcloud CLI and set max_bad_records to 0.
D. Run a Google Cloud Dataflow batch pipeline to import the data into BigQuery, and push errors to another dead-letter table for analysis.
Correct Answer: D
Section: (none)
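A rough sketch of the dead-letter pattern from option D in the Apache Beam Python SDK; the bucket path, table names, CSV layout, and the assumption that both BigQuery tables already exist are all illustrative.
import csv
import apache_beam as beam

def parse_csv(line):
    # Good rows go to the main output; rows that fail parsing go to the 'errors' output.
    try:
        fields = next(csv.reader([line]))
        yield {"id": int(fields[0]), "value": fields[1]}
    except Exception:
        yield beam.pvalue.TaggedOutput("errors", {"raw_line": line})

with beam.Pipeline() as p:
    results = (
        p
        | "ReadCSV" >> beam.io.ReadFromText("gs://my-bucket/daily_dump.csv")
        | "Parse" >> beam.FlatMap(parse_csv).with_outputs("errors", main="rows")
    )
    # Both tables are assumed to already exist, hence CREATE_NEVER.
    results.rows | "GoodRowsToBQ" >> beam.io.WriteToBigQuery(
        "my_dataset.clean_rows", create_disposition=beam.io.BigQueryDisposition.CREATE_NEVER)
    results.errors | "BadRowsToBQ" >> beam.io.WriteToBigQuery(
        "my_dataset.dead_letter", create_disposition=beam.io.BigQueryDisposition.CREATE_NEVER)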
QUESTION 6
Your weather app queries a database every 15 minutes to get the current temperature. The frontend is powered by Google App Engine and serves millions of users. How should you design the frontend to respond to a database failure?
A. Issue a command to restart the database servers.
B. Retry the query with exponential backoff, up to a cap of 15 minutes.
C. Retry the query every second until it comes back online to minimize staleness of data.
D. Reduce the query frequency to once every hour until the database comes back online.
Correct Answer: B
Section: (none)
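A generic sketch of exponential backoff with jitter; run_query stands in for whatever database call the frontend actually makes, and the 15-minute cap mirrors option B.
import random
import time

def query_with_backoff(run_query, cap_seconds=900):
    # Retry with exponential backoff and jitter, capped at 15 minutes,
    # so millions of frontend instances do not hammer a failing database.
    delay = 1
    while True:
        try:
            return run_query()
        except Exception:
            time.sleep(min(delay, cap_seconds) + random.uniform(0, 1))
            delay = min(delay * 2, cap_seconds)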
QUESTION 7
You are creating a model to predict housing prices. Due to budget constraints, you must run it on a single resource-constrained virtual machine. Which learning algorithm should you use?
A. Linear regression
B. Logistic classification
C. Recurrent neural network
D. Feedforward neural network
Correct Answer: A
Section: (none)
QUESTION 8
You are building a new real-time data warehouse for your company and will use Google BigQuery streaming inserts. There is no guarantee that data will be sent only once, but you do have a unique ID for each row of data and an event timestamp. You want to ensure that duplicates are not included while interactively querying data. Which query type should you use?
A. Include ORDER BY DESC on timestamp column and LIMIT to 1.
B. Use GROUP BY on the unique ID column and timestamp column and SUM on the values.
C. Use the LAG window function with PARTITION by unique ID along with WHERE LAG IS NOT NULL.
D. Use the ROW_NUMBER window function with PARTITION by unique ID along with WHERE row equals 1.
Correct Answer: D
Section: (none)
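One way the ROW_NUMBER approach might look in BigQuery standard SQL, submitted through the Python client; the dataset, table, and column names (my_dataset.events, unique_id, event_ts) are placeholders.
from google.cloud import bigquery

client = bigquery.Client()
sql = """
SELECT * EXCEPT(row_num)
FROM (
  SELECT *,
    ROW_NUMBER() OVER (PARTITION BY unique_id ORDER BY event_ts DESC) AS row_num
  FROM `my_dataset.events`
)
WHERE row_num = 1  -- keep exactly one row per unique ID
"""
for row in client.query(sql).result():
    print(dict(row))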
QUESTION 9
Your company is using WILDCARD tables to query data across multiple tables with similar names. The SQL statement is currently failing with the following error:
# Syntax error : Expected end of statement but got "-" at [4:11]
SELECT age
FROM
  bigquery-public-data.noaa_gsod.gsod
WHERE
  age != 99
  AND _TABLE_SUFFIX = '1929'
ORDER BY
  age DESC
Which table name will make the SQL statement work correctly?
A. `bigquery-public-data.noaa_gsod.gsod`
B. bigquery-public-data.noaa_gsod.gsod*
C. `bigquery-public-data.noaa_gsod.gsod'*
D. `bigquery-public-data.noaa_gsod.gsod*`
Correct Answer: D
Section: (none)
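For reference, a corrected form of the failing query with the wildcard table name enclosed in backticks, run through the Python client; it simply mirrors the question's columns and filter.
from google.cloud import bigquery

client = bigquery.Client()
sql = """
SELECT age
FROM `bigquery-public-data.noaa_gsod.gsod*`  -- wildcard table name must be backquoted
WHERE age != 99
  AND _TABLE_SUFFIX = '1929'
ORDER BY age DESC
"""
rows = client.query(sql).result()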
QUESTION 10
Your company is in a highly regulated industry. One of your requirements is to ensure individual users have access only to the minimum amount of information required to do their jobs. You want to enforce this requirement with Google BigQuery. Which three approaches can you take? (Choose three.)
A. Disable writes to certain tables.
B. Restrict access to tables by role.
C. Ensure that the data is encrypted at all times.
D. Restrict BigQuery API access to approved users.
E. Segregate data across multiple tables or databases.
F. Use Google Stackdriver Audit Logging to determine policy violations.
Correct Answer: BDF
Section: (none)
QUESTION 11
You are designing a basket abandonment system for an ecommerce company. The system will send a message to a user based on these rules:
No interaction by the user on the site for 1 hour
Has added more than $30 worth of products to the basket
Has not completed a transaction
You use Google Cloud Dataflow to process the data and decide if a message should be sent. How should you design the pipeline?
A. Use a fixed-time window with a duration of 60 minutes.
B. Use a sliding time window with a duration of 60 minutes.
C. Use a session window with a gap time duration of 60 minutes.
D. Use a global window with a time based trigger with a delay of 60 minutes.
Correct Answer: C
Section: (none)
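A minimal Apache Beam (Python SDK) sketch of the session-window idea; the topic name and the user_id field are assumptions, and production use would also need streaming pipeline options.
import json
import apache_beam as beam
from apache_beam.transforms import window

with beam.Pipeline() as p:
    sessions = (
        p
        | "ReadEvents" >> beam.io.ReadFromPubSub(topic="projects/my-project/topics/basket-events")
        | "Parse" >> beam.Map(json.loads)
        | "KeyByUser" >> beam.Map(lambda e: (e["user_id"], e))  # user_id is a hypothetical field
        # A session window for a key closes after 60 minutes with no new events,
        # matching the "no interaction for 1 hour" rule.
        | "SessionWindow" >> beam.WindowInto(window.Sessions(60 * 60))
        | "PerUserSession" >> beam.GroupByKey()
        # Downstream logic would check basket value > $30 and no completed transaction.
    )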
QUESTION 12
Your company handles data processing for a number of different clients. Each client prefers to use their
own suite of analytics tools, with some allowing direct query access via Google BigQuery. You need to secure the data so that clients cannot see each other's data. You want to ensure appropriate access to the data. Which three steps should you take? (Choose three.)
A. Load data into different partitions.
B. Load data into a different dataset for each client.
C. Put each client's BigQuery dataset into a different table.
D. Restrict a client's dataset to approved users.
E. Only allow a service account to access the datasets.
F. Use the appropriate identity and access management (IAM) roles for each client's users.
Correct Answer: BDF
Section: (none)
QUESTION 13
You want to process payment transactions in a point-of-sale application that will run on Google Cloud Platform. Your user base could grow exponentially, but you do not want to manage infrastructure scaling. Which Google database service should you use?
A. Cloud SQL
B. BigQuery
C. Cloud Bigtable
D. Cloud Datastore
Correct Answer: D
Section: (none)
QUESTION 14
You want to use a database of information about tissue samples to classify future tissue samples as either normal or mutated. You are evaluating an unsupervised anomaly detection method for classifying the tissue samples. Which two characteristics support this method? (Choose two.)
A. There are very few occurrences of mutations relative to normal samples.
B. There are roughly equal occurrences of both normal and mutated samples in the database.
C. You expect future mutations to have different features from the mutated samples in the database.
D. You expect future mutations to have similar features to the mutated samples in the database.
E. You already have labels for which samples are mutated and which are normal in the database.
Correct Answer: AD
Section: (none)
QUESTION 15
You need to store and analyze social media postings in Google BigQuery at a rate of 10,000 messages per minute in near real-time. Initially, you design the application to use streaming inserts for individual postings.
Your application also performs data aggregations right after the streaming inserts. You discover that the queries after streaming inserts do not exhibit strong consistency, and reports from the queries might miss in-flight data.
How can you adjust your application design?
A. Re-write the application to load accumulated data every 2 minutes.
B. Convert the streaming insert code to batch load for individual messages.
C. Load the original message to Google Cloud SQL, and export the table every hour to BigQuery via streaming inserts.
D. Estimate the average latency for data availability after streaming inserts, and always run queries after waiting twice as long.
Correct Answer: D
Section: (none)
QUESTION 16
Your startup has never implemented a formal security policy. Currently, everyone in the company has access to the datasets stored in Google BigQuery. Teams have freedom to use the service as they see fit, and they have not documented their use cases. You have been asked to secure the data warehouse. You need to discover what everyone is doing. What should you do first?
A. Use Google Stackdriver Audit Logs to review data access.
B. Get the identity and access management (IAM) policy of each table.
C. Use Stackdriver Monitoring to see the usage of BigQuery query slots.
D. Use the Google Cloud Billing API to see what account the warehouse is being billed to.
Correct Answer: A
Section: (none)
QUESTION 17
Your company is migrating their 30-node Apache Hadoop cluster to the cloud. They want to re-use Hadoop jobs they have already created and minimize the management of the cluster as much as possible. They also want to be able to persist data beyond the life of the cluster. What should you do?
A. Create a Google Cloud Dataflow job to process the data.
B. Create a Google Cloud Dataproc cluster that uses persistent disks for HDFS.
C. Create a Hadoop cluster on Google Compute Engine that uses persistent disks.
D. Create a Cloud Dataproc cluster that uses the Google Cloud Storage connector.
E. Create a Hadoop cluster on Google Compute Engine that uses Local SSD disks.
Correct Answer: D
Section: (none)
QUESTION 18
Business owners at your company have given you a database of bank transactions. Each row contains the user ID, transaction type, transaction location, and transaction amount. They ask you to investigate what type of machine learning can be applied to the data. Which three machine learning applications can you use? (Choose three.)
A. Supervised learning to determine which transactions are most likely to be fraudulent.
B. Unsupervised learning to determine which transactions are most likely to be fraudulent.
C. Clustering to divide the transactions into N categories based on feature similarity.
D. Supervised learning to predict the location of a transaction.
E. Reinforcement learning to predict the location of a transaction.
F. Unsupervised learning to predict the location of a transaction.
Correct Answer: BCD
Section: (none)
QUESTION 19
Your company's on-premises Apache Hadoop servers are approaching end-of-life, and IT has decided to migrate the cluster to Google Cloud Dataproc. A like-for-like migration of the cluster would require 50 TB of Google Persistent Disk per node. The CIO is concerned about the cost of using that much block storage. You want to minimize the storage cost of the migration. What should you do?
A. Put the data into Google Cloud Storage.
B. Use preemptible virtual machines (VMs) for the Cloud Dataproc cluster.
C. Tune the Cloud Dataproc cluster so that there is just enough disk for all data.
D. Migrate some of the cold data into Google Cloud Storage, and keep only the hot data in Persistent Disk.
Correct Answer: A
Section: (none)
QUESTION 20
You work for a car manufacturer and have set up a data pipeline using Google Cloud Pub/Sub to capture anomalous sensor events. You are using a push subscription in Cloud Pub/Sub that calls a custom HTTPS endpoint that you have created to take action on these anomalous events as they occur. Your custom HTTPS endpoint keeps getting an inordinate amount of duplicate messages. What is the most likely cause of these duplicate messages?
A. The message body for the sensor event is too large.
B. Your custom endpoint has an out-of-date SSL certificate.
C. The Cloud Pub/Sub topic has too many messages published to it.
D. Your custom endpoint is not acknowledging messages within the acknowledgement deadline.
Correct Answer: D
Section: (none)
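A simplified sketch of a push endpoint that acknowledges within the deadline by returning a success status immediately and deferring the slow work; Flask, the /push route, and the in-process queue are illustrative choices only.
import queue
import threading
from flask import Flask, request

app = Flask(__name__)
work_queue = queue.Queue()

def worker():
    while True:
        message = work_queue.get()
        # ... long-running handling of the anomalous sensor event goes here ...
        work_queue.task_done()

threading.Thread(target=worker, daemon=True).start()

@app.route("/push", methods=["POST"])
def handle_push():
    envelope = request.get_json()
    work_queue.put(envelope["message"])  # defer the slow work
    # Returning 2xx quickly acknowledges the message before the ack deadline,
    # so Cloud Pub/Sub does not redeliver it as a duplicate.
    return ("", 204)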
QUESTION 21
Your company uses a proprietary system to send inventory data every 6 hours to a data ingestion service in the cloud. Transmitted data includes a payload of several fields and the timestamp of the transmission. If there are any concerns about a transmission, the system re-transmits the data. How should you deduplicate the data most efficiently?
A. Assign global unique identifiers (GUID) to each data entry.
B. Compute the hash value of each data entry, and compare it with all historical data.
C. Store each data entry as the primary key in a separate database and apply an index.
D. Maintain a database table to store the hash value and other metadata for each data entry.
Correct Answer: A
Section: (none)
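A tiny sketch of option A: the sender tags each data entry with a GUID when the entry is created, so re-transmissions carry the same identifier and can be dropped on arrival; the record fields are illustrative.
import uuid

def create_entry(payload, transmitted_at):
    # The GUID is assigned once when the entry is created, so a re-transmission
    # of the same entry carries the same identifier.
    return {"guid": str(uuid.uuid4()), "payload": payload, "transmitted_at": transmitted_at}

seen_guids = set()

def deduplicate(entries):
    for entry in entries:
        if entry["guid"] not in seen_guids:
            seen_guids.add(entry["guid"])
            yield entry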
QUESTION 22
Your company has hired a new data scientist who wants to perform complicated analyses across very large datasets stored in Google Cloud Storage and in a Cassandra cluster on Google Compute Engine. The scientist primarily wants to create labelled data sets for machine learning projects, along with some visualization tasks. She reports that her laptop is not powerful enough to perform her tasks and it is slowing her down. You want to help her perform her tasks. What should you do?
A. Run a local version of Jupyter on the laptop.
B. Grant the user access to Google Cloud Shell.
C. Host a visualization tool on a VM on Google Compute Engine.
D. Deploy Google Cloud Datalab to a virtual machine (VM) on Google Compute Engine.
Correct Answer: D
Section: (none)
QUESTION 23
You are deploying 10,000 new Internet of Things devices to collect temperature data in your warehouses globally. You need to process, store, and analyze these very large datasets in real time. What should you do?
A. Send the data to Google Cloud Datastore and then export to BigQuery.
B. Send the data to Google Cloud Pub/Sub, stream Cloud Pub/Sub to Google Cloud Dataflow, and store the data in Google BigQuery.
C. Send the data to Cloud Storage and then spin up an Apache Hadoop cluster as needed in Google Cloud Dataproc whenever analysis is required.
D. Export logs in batch to Google Cloud Storage and then spin up a Google Cloud SQL instance, import the data from Cloud Storage, and run an analysis as needed.
Correct Answer: B
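A condensed sketch of the chosen Pub/Sub to Dataflow to BigQuery path in the Beam Python SDK; the subscription, table, and schema are placeholders.
import json
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions, StandardOptions

options = PipelineOptions()
options.view_as(StandardOptions).streaming = True  # Pub/Sub is an unbounded source

with beam.Pipeline(options=options) as p:
    (
        p
        | "ReadFromPubSub" >> beam.io.ReadFromPubSub(
            subscription="projects/my-project/subscriptions/iot-temperature")
        | "Parse" >> beam.Map(json.loads)
        | "WriteToBigQuery" >> beam.io.WriteToBigQuery(
            "my_dataset.temperature_readings",
            schema="device_id:STRING,temp_c:FLOAT,reading_time:TIMESTAMP")
    )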
Section: (none)
QUESTION 24
You have spent a few days loading data from comma-separated values (CSV) files into the Google BigQuery table CLICK_STREAM. The column DT stores the epoch time of click events. For convenience, you chose a simple schema where every field is treated as the STRING type. Now, you want to compute web session durations of users who visit your site, and you want to change its data type to TIMESTAMP. You want to minimize the migration effort without making future queries computationally expensive. What should you do?
A. Delete the table CLICK_STREAM, and then re-create it such that the column DT is of the TIMESTAMP type.
Reload the data.
B. Add a column TS of the TIMESTAMP type to the table CLICK_STREAM, and populate the numeric values from the column TS for each row. Reference the column TS instead of the column DT from now on.
C. Create a view CLICK_STREAM_V, where strings from the column DT are cast into TIMESTAMP values.
Reference the view CLICK_STREAM_V instead of the table CLICK_STREAM from now on.
D. Add two columns to the table CLICK_STREAM: TS of the TIMESTAMP type and IS_NEW of the BOOLEAN type. Reload all data in append mode. For each appended row, set the value of IS_NEW to true. For future queries, reference the column TS instead of the column DT, with the WHERE clause ensuring that the value of IS_NEW must be true.
E. Construct a query to return every row of the table CLICK_STREAM, while using the built-in function to cast strings from the column DT into TIMESTAMP values. Run the query into a destination table NEW_CLICK_STREAM, in which the column TS is the TIMESTAMP type. Reference the table NEW_CLICK_STREAM instead of the table CLICK_STREAM from now on. In the future, new data is loaded into the table NEW_CLICK_STREAM.
Correct Answer: E
Section: (none)
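A sketch of option E using the BigQuery Python client: cast DT once into a destination table NEW_CLICK_STREAM; the project and dataset names, and the assumption that DT stores epoch seconds as strings, are illustrative.
from google.cloud import bigquery

client = bigquery.Client()
destination = "my-project.my_dataset.NEW_CLICK_STREAM"
job_config = bigquery.QueryJobConfig(
    destination=destination,
    write_disposition=bigquery.WriteDisposition.WRITE_TRUNCATE,
)
sql = """
SELECT
  * EXCEPT(DT),
  TIMESTAMP_SECONDS(CAST(DT AS INT64)) AS TS  -- cast the epoch string once
FROM `my-project.my_dataset.CLICK_STREAM`
"""
client.query(sql, job_config=job_config).result()  # future loads target NEW_CLICK_STREAM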
QUESTION 25
You want to use Google Stackdriver Logging to monitor Google BigQuery usage. You need an instant notification to be sent to your monitoring tool when new data is appended to a certain table using an insert job, but you do not want to receive notifications for other tables. What should you do?
A. Make a call to the Stackdriver API to list all logs, and apply an advanced filter.
B. In the Stackdriver logging admin interface, enable a log sink export to BigQuery.
C. In the Stackdriver logging admin interface, enable a log sink export to Google Cloud Pub/Sub, and subscribe to the topic from your monitoring tool.
D. Using the Stackdriver API, create a project sink with an advanced log filter to export to Pub/Sub, and subscribe to the topic from your monitoring tool.
Correct Answer: D
Section: (none)
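One possible shape of option D using the google-cloud-logging client; the sink name, topic, and especially the advanced filter string are assumptions and would likely need adjusting for the real BigQuery audit-log fields.
from google.cloud import logging as cloud_logging

client = cloud_logging.Client()
# Advanced filter that tries to match completed insert (load) jobs into one table;
# the exact field names should be verified against the audit logs.
log_filter = (
    'resource.type="bigquery_resource" '
    'AND protoPayload.methodName="jobservice.jobcompleted" '
    'AND protoPayload.serviceData.jobCompletedEvent.job.jobConfiguration.load.destinationTable.tableId="my_table"'
)
sink = client.sink(
    "bq-insert-notifications",
    filter_=log_filter,
    destination="pubsub.googleapis.com/projects/my-project/topics/bq-monitoring",
)
sink.create()
# The monitoring tool then subscribes to the bq-monitoring Pub/Sub topic.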
QUESTION 26
You are working on a sensitive project involving private user data. You have set up a project on Google Cloud Platform to house your work internally. An external consultant is going to assist with coding a complex transformation in a Google Cloud Dataflow pipeline for your project. How should you maintain users' privacy?
A. Grant the consultant the Viewer role on the project.
B. Grant the consultant the Cloud Dataflow Developer role on the project.
C. Create a service account and allow the consultant to log on with it.
D. Create an anonymized sample of the data for the consultant to work with in a different project.
Correct Answer: B
Section: (none)
QUESTION 27
You are building a model to predict whether or not it will rain on a given day. You have thousands of input
features and want to see if you can improve training speed by removing some features while having a minimum effect on model accuracy. What can you do?
A. Eliminate features that are highly correlated to the output labels.
B. Combine highly co-dependent features into one representative feature.
C. Instead of feeding in each feature individually, average their values in batches of 3.
D. Remove the features that have null values for more than 50% of the training records.
Correct Answer: B
Section: (none)
QUESTION 28
Your company is performing data preprocessing for a learning algorithm in Google Cloud Dataflow. Numerous data logs are being generated during this step, and the team wants to analyze them. Due to the dynamic nature of the campaign, the data is growing exponentially every hour. The data scientists have written the following code to read the data for new key features in the logs.
BigQueryIO.Read
.named("ReadLogData")
.from("clouddataflow-readonly:samples.log_data")
You want to improve the performance of this data read. What should you do?
A. Specify the TableReference object in the code.
B. Use .fromQuery operation to read specific fields from the table.
C. Use of both the Google BigQuery TableSchema and TableFieldSchema classes.
D. Call a transform that returns TableRow objects, where each element in the PCollection represents a single row in the table.
Correct Answer: B
Section: (none)
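The question's snippet uses the Beam Java SDK; as a sketch of the same idea in the Python SDK, a query can read only the needed fields instead of scanning the whole table (the selected field names are hypothetical).
import apache_beam as beam

with beam.Pipeline() as p:
    rows = (
        p
        | "ReadLogData" >> beam.io.ReadFromBigQuery(
            query="SELECT user_id, key_feature FROM `clouddataflow-readonly.samples.log_data`",
            use_standard_sql=True)
        | "Process" >> beam.Map(lambda row: row)  # placeholder downstream step
    )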
QUESTION 29
Your company is streaming real-time sensor data from their factory floor into Bigtable and they have noticed extremely poor performance. How should the row key be redesigned to improve Bigtable performance on queries that populate real-time dashboards?
A. Use a row key of the form <timestamp>.
B. Use a row key of the form <sensorid>.
C. Use a row key of the form <timestamp>#<sensorid>.
D. Use a row key of the form <sensorid>#<timestamp>.
Correct Answer: D
Section: (none)
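A sketch of writing rows keyed as <sensorid>#<timestamp> with the Python Bigtable client; the project, instance, table, and column family names are placeholders.
import datetime
from google.cloud import bigtable

client = bigtable.Client(project="my-project")
table = client.instance("sensors").table("readings")

def write_reading(sensor_id, reading):
    ts = datetime.datetime.utcnow()
    # Leading with the sensor ID spreads writes across the key space instead of
    # hotspotting on one timestamp range; the timestamp keeps rows ordered per sensor.
    row = table.direct_row(f"{sensor_id}#{ts.isoformat()}".encode("utf-8"))
    row.set_cell("measurements", "value", str(reading), timestamp=ts)
    row.commit()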
QUESTION 30
Your company's customer and order databases are often under heavy load. This makes performing analytics against them difficult without harming operations. The databases are in a MySQL cluster, with nightly backups taken using mysqldump. You want to perform analytics with minimal impact on operations. What should you do?
A. Add a node to the MySQL cluster and build an OLAP cube there.
B. Use an ETL tool to load the data from MySQL into Google BigQuery.
C. Connect an on-premises Apache Hadoop cluster to MySQL and perform ETL.
D. Mount the backups to Google Cloud SQL, and then process the data using Google Cloud Dataproc.
Correct Answer: B
Section: (none)
QUESTION 31
You have a Google Cloud Dataflow streaming pipeline running with a Google Cloud Pub/Sub subscription as the source. You need to make an update to the code that will make the new Cloud Dataflow pipeline incompatible with the current version. You do not want to lose any data when making this update. What should you do?
A. Update the current pipeline and use the drain flag.
B. Update the current pipeline and provide the transform mapping JSON object.
C. Create a new pipeline that has the same Cloud Pub/Sub subscription and cancel the old pipeline.
D. Create a new pipeline that has a new Cloud Pub/Sub subscription and cancel the old pipeline.
Correct Answer: D
Section: (none)
QUESTION 32
Your company is running their first dynamic campaign, serving different offers by analyzing real-time data during the holiday season. The data scientists are collecting terabytes of data that rapidly grows every hour during their 30-day campaign. They are using Google Cloud Dataflow to preprocess the data and collect the feature (signals) data that is needed for the machine learning model in Google Cloud Bigtable. The team is observing suboptimal performance with reads and writes of their initial load of 10 TB of data. They want to improve this performance while minimizing cost. What should they do?
A. Redefine the schema by evenly distributing reads and writes across the row space of the table.
B. The performance issue should be resolved over time as the size of the Bigtable cluster is increased.
C. Redesign the schema to use a single row key to identify values that need to be updated frequently in the cluster.
D. Redesign the schema to use row keys based on numeric IDs that increase sequentially per user viewing the offers.
Correct Answer: A
Section: (none)
QUESTION 33
Your software uses a simple JSON format for all messages. These messages are published to Google Cloud Pub/Sub, then processed with Google Cloud Dataflow to create a real-time dashboard for the CFO. During testing, you notice that some messages are missing in the dashboard. You check the logs, and all messages are being published to Cloud Pub/Sub successfully. What should you do next?
A. Check the dashboard application to see if it is not displaying correctly.
B. Run a fixed dataset through the Cloud Dataflow pipeline and analyze the output.
C. Use Google Stackdriver Monitoring on Cloud Pub/Sub to find the missing messages.
D. Switch Cloud Dataflow to pull messages from Cloud Pub/Sub instead of Cloud Pub/Sub pushing messages to Cloud Dataflow.
Correct Answer: C
Section: (none)
QUESTION 34
Flowlogistic Case Study
Company Overview
Flowlogistic is a leading logistics and supply chain provider. They help businesses throughout the world manage their resources and transport them to their final destination. The company has grown rapidly, expanding their offerings to include rail, truck, aircraft, and oceanic shipping.
Company Background
The company started as a regional trucking company, and then expanded into other logistics markets. Because they have not updated their infrastructure, managing and tracking orders and shipments has become a bottleneck. To improve operations, Flowlogistic developed proprietary technology for tracking shipments in real time at the parcel level. However, they are unable to deploy it because their technology stack, based on Apache Kafka, cannot support the processing volume. In addition, Flowlogistic wants to further analyze their orders and shipments to determine how best to deploy their resources.
Solution Concept
Flowlogistic wants to implement two concepts using the cloud:
Use their proprietary technology in a real-time inventory-tracking system that indicates the location of their loads
Perform analytics on all their orders and shipment logs, which contain both structured and unstructured data, to determine how best to deploy resources and which markets to expand into. They also want to use predictive analytics to learn earlier when a shipment will be delayed.
Existing Technical Environment
Flowlogistic architecture resides in a single data center:
Databases
- 8 physical servers in 2 clusters
  - SQL Server - user data, inventory, static data
- 3 physical servers
  - Cassandra - metadata, tracking messages
- 10 Kafka servers - tracking message aggregation and batch insert
Application servers - customer front end, middleware for order/customs
- 60 virtual machines across 20 physical servers
  - Tomcat - Java services
  - Nginx - static content
  - Batch servers
Storage appliances
- iSCSI for virtual machine (VM) hosts
- Fibre Channel storage area network (FC SAN) - SQL server storage
- Network-attached storage (NAS) - image storage, logs, backups
10 Apache Hadoop / Spark servers
- Core Data Lake
- Data analysis workloads
20 miscellaneous servers
- Jenkins, monitoring, bastion hosts
Business Requirements
Build a reliable and reproducible environment with scaled parity of production.
Aggregate data in a centralized Data Lake for analysis
Use historical data to perform predictive analytics on future shipments
Accurately track every shipment worldwide using proprietary technology
Improve business agility and speed of innovation through rapid provisioning of new resources
Analyze and optimize architecture for performance in the cloud
Migrate fully to the cloud if all other requirements are met
Technical Requirements
Handle both streaming and batch data
Migrate existing Hadoop workloads
Ensure architecture is scalable and elastic to meet the changing demands of the company.
Use managed services whenever possible
Encrypt data in flight and at rest
Connect a VPN between the production data center and cloud environment
CEO Statement
We have grown so quickly that our inability to upgrade our infrastructure is really hampering further growth and efficiency. We are efficient at moving shipments around the world, but we are inefficient at moving data around.
We need to organize our information so we can more easily understand where our customers are and what they are shipping.
CTO Statement
IT has never been a priority for us, so as our data has grown, we have not invested enough in our technology. I have a good staff to manage IT, but they are so busy managing our infrastructure that I cannot get them to do the things that really matter, such as organizing our data, building the analytics, and figuring out how to implement the CFO's tracking technology.
CFO Statement
Part of our competitive advantage is that we penalize ourselves for late shipments and deliveries. Knowing where our shipments are at all times has a direct correlation to our bottom line and profitability. Additionally, I don't want to commit capital to building out a server environment.
Flowlogistic wants to use Google BigQuery as their primary analysis system, but they still have Apache Hadoop and Spark workloads that they cannot move to BigQuery. Flowlogistic does not know how to store the data that is common to both workloads. What should they do?
A. Store the common data in BigQuery as partitioned tables.
B. Store the common data in BigQuery and expose authorized views.
C. Store the common data encoded as Avro in Google Cloud Storage.
D. Store the common data in the HDFS storage for a Google Cloud Dataproc cluster.
Correct Answer: C
Section: (none)
QUESTION 35
Flowlogistic Case Study (refer to the full case study under Question 34)
Flowlogistic's management has determined that the current Apache Kafka servers cannot handle the data volume for their real-time inventory tracking system. You need to build a new system on Google Cloud Platform (GCP) that will feed the proprietary tracking software. The system must be able to ingest data from a variety of global sources, process and query in real-time, and store the data reliably. Which combination of GCP products should you choose?
A. Cloud Pub/Sub, Cloud Dataflow, and Cloud Storage
B. Cloud Pub/Sub, Cloud Dataflow, and Local SSD
C. Cloud Pub/Sub, Cloud SQL, and Cloud Storage
D. Cloud Load Balancing, Cloud Dataflow, and Cloud Storage
Correct Answer: A
Section: (none)
QUESTION 36
Flowlogistic Case Study (refer to the full case study under Question 34)
Flowlogistic's CEO wants to gain rapid insight into their customer base so his sales team can be better informed in the field. This team is not very technical, so they've purchased a visualization tool to simplify the creation of BigQuery reports. However, they've been overwhelmed by all the data in the table, and are spending a lot of money on queries trying to find the data they need. You want to solve their problem in the most cost-effective way. What should you do?
A. Export the data into a Google Sheet for visualization.
B. Create an additional table with only the necessary columns.
C. Create a view on the table to present to the visualization tool.
D. Create identity and access management (IAM) roles on the appropriate columns, so only they appear in a query.
Correct Answer: C
Section: (none)
QUESTION 37
Flowlogistic Case Study (refer to the full case study under Question 34)
Flowlogistic is rolling out their real-time inventory tracking system. The tracking devices will all send package-tracking messages, which will now go to a single Google Cloud Pub/Sub topic instead of the Apache Kafka cluster. A subscriber application will then process the messages for real-time reporting and
store them in Google BigQuery for historical analysis. You want to ensure the package data can be analyzed over time.
Which approach should you take?
A. Attach the timestamp on each message in the Cloud Pub/Sub subscriber application as they are received.
B. Attach the timestamp and Package ID on the outbound message from each publisher device as they are sent to Cloud Pub/Sub.
C. Use the NOW() function in BigQuery to record the event's time.
D. Use the automatically generated timestamp from Cloud Pub/Sub to order the data.
Correct Answer: B
Section: (none)
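A sketch of option B on the publisher (device) side: attach the package ID and event timestamp as Cloud Pub/Sub message attributes when publishing; the topic name and attribute keys are illustrative.
import datetime
import json
from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("my-project", "package-tracking")

def publish_tracking_message(package_id, payload):
    # Attribute values must be strings; the device records the event time itself,
    # so analysis over time does not depend on when Pub/Sub or BigQuery receives the message.
    future = publisher.publish(
        topic_path,
        json.dumps(payload).encode("utf-8"),
        package_id=str(package_id),
        event_timestamp=datetime.datetime.utcnow().isoformat(),
    )
    return future.result()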
QUESTION 38
MJTelco Case Study
Company Overview
MJTelco is a startup that plans to build networks in rapidly growing, underserved markets around the world. The company has patents for innovative optical communications hardware. Based on these patents, they can create many reliable, high-speed backbone links with inexpensive hardware.
Company Background
Founded by experienced telecom executives, MJTelco uses technologies originally developed to overcome communications challenges in space. Fundamental to their operation, they need to create a distributed data infrastructure that drives real-time analysis and incorporates machine learning to continuously optimize their topologies. Because their hardware is inexpensive, they plan to overdeploy the network, allowing them to account for the impact of dynamic regional politics on location availability and cost. Their management and operations teams are situated all around the globe, creating a many-to-many relationship between data consumers and providers in their system. After careful consideration, they decided public cloud is the perfect environment to support their needs.
Solution Concept
MJTelco is running a successful proof-of-concept (PoC) project in its labs. They have two primary needs:
Scale and harden their PoC to support significantly more data flows generated when they ramp to more than 50,000 installations.
Refine their machine-learning cycles to verify and improve the dynamic models they use to control topology definition.
MJTelco will also use three separate operating environments - development/test, staging, and production - to meet the needs of running experiments, deploying new features, and serving production customers.
Business Requirements
Scale up their production environment with minimal cost, instantiating resources when and where needed in an unpredictable, distributed telecom user community.
Ensure security of their proprietary data to protect their leading-edge machine learning and analysis.
Provide reliable and timely access to data for analysis from distributed research workers
Maintain isolated environments that support rapid iteration of their machine-learning models without affecting their customers.
Technical Requirements
Ensure secure and efficient transport and storage of telemetry data
Rapidly scale instances to support between 10,000 and 100,000 data providers with multiple flows each.
Allow analysis and presentation against data tables tracking up to 2 years of data storing approximately 100m records/day
Support rapid iteration of monitoring infrastructure focused on awareness of data pipeline problems both in telemetry flows and in production learning cycles.
CEO Statement
Our business model relies on our patents, analytics and dynamic machine learning. Our inexpensive hardware is organized to be highly reliable, which gives us cost advantages. We need to quickly stabilize our large distributed data pipelines to meet our reliability and capacity commitments.
CTO Statement
Our public cloud services must operate as advertised. We need resources that scale and keep our data secure. We also need environments in which our data scientists can carefully study and quickly adapt our models. Because we rely on automation to process our data, we also need our development and test environments to work as we iterate.
CFO Statement
The project is too large for us to maintain the hardware and software required for the data and analysis. Also, we cannot afford to staff an operations team to monitor so many data feeds, so we will rely on automation and infrastructure. Google Cloud's machine learning will allow our quantitative researchers to work on our high-value problems instead of problems with our data pipelines.
MJTelco's Google Cloud Dataflow pipeline is now ready to start receiving data from the 50,000 installations. You want to allow Cloud Dataflow to scale its compute power up as required. Which Cloud Dataflow pipeline configuration setting should you update?
A. The zone
B. The number of workers
C. The disk size per worker
D. The maximum number of workers
Correct Answer: D
Section: (none)
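A sketch of raising the autoscaling ceiling when launching a Beam pipeline on Dataflow from Python; the project, region, bucket, and the value 100 are placeholders.
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

# max_num_workers caps Dataflow autoscaling; raising it lets the service add workers as load grows.
options = PipelineOptions(
    runner="DataflowRunner",
    project="my-project",
    region="us-central1",
    temp_location="gs://my-bucket/tmp",
    max_num_workers=100,
)

with beam.Pipeline(options=options) as p:
    _ = p | "Placeholder" >> beam.Create([1, 2, 3]) | beam.Map(print)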
QUESTION 39
MJTelco Case Study (refer to the full case study under Question 38)
You need to compose visualizations for operations teams with the following requirements:
The report must include telemetry data from all 50,000 installations for the most recent 6 weeks (sampling once every minute).
The report must not be more than 3 hours delayed from live data.
The actionable report should only show suboptimal links.
Most suboptimal links should be sorted to the top.
Suboptimal links can be grouped and filtered by regional geography.
User response time to load the report must be <5 seconds.
Which approach meets the requirements?
A. Load the data into Google Sheets, use formulas to calculate a metric, and use filters/sorting to show only suboptimal links in a table.
B. Load the data into Google BigQuery tables, write Google Apps Script that queries the data, calculates the metric, and shows only suboptimal rows in a table in Google Sheets.
C. Load the data into Google Cloud Datastore tables, write a Google App Engine Application that queries all rows, applies a function to derive the metric, and then renders results in a table using the Google charts and visualization API.
D. Load the data into Google BigQuery tables, write a Google Data Studio 360 report that connects to your data, calculates a metric, and then uses a filter expression to show only suboptimal rows in a table.
Correct Answer: D
Section: (none)
QUESTION 40
MJTelco Case Study (refer to the full case study under Question 38)
You create a new report for your large team in Google Data Studio 360. The report uses Google BigQuery as its data source. It is company policy to ensure employees can view only the data associated with their region, so you create and populate a table for each region. You need to enforce the regional access policy to the data.
Which two actions should you take? (Choose two.)
A. Ensure all the tables are included in a global dataset.
B. Ensure each table is included in a dataset for a region.
C. Adjust the settings for each table to allow a related region-based security group view access.
D. Adjust the settings for each view to allow a related region-based security group view access.
E. Adjust the settings for each dataset to allow a related region-based security group view access.
Correct Answer: BE
Section: (none)