
Good news: CCA-500 Cloudera Certified Administrator for Apache Hadoop (CCAH) is now stable, with a solid pass record.

CCA-500 Practice Exam Questions and Answers

Cloudera Certified Administrator for Apache Hadoop (CCAH)

Last Update 15 hours ago
Total Questions: 60

Cloudera Certified Administrator for Apache Hadoop (CCAH) is now stable, with the latest exam questions added 15 hours ago. Incorporating CCA-500 practice exam questions into your study plan is more than just a preparation strategy.

CCA-500 exam questions often include scenarios and problem-solving exercises that mirror real-world challenges. Working through CCA-500 dumps allows you to practice pacing yourself, ensuring that you can complete the full Cloudera Certified Administrator for Apache Hadoop (CCAH) practice test within the allotted time frame.

CCA-500 PDF

CCA-500 PDF (Printable)
$48
$119.99

CCA-500 Testing Engine

CCA-500 Testing Engine
$56
$139.99

CCA-500 PDF + Testing Engine

CCA-500 PDF + Testing Engine
$70.80
$176.99
Question # 1

You are running a Hadoop cluster with a NameNode on host mynamenode. What are two ways to determine available HDFS space in your cluster?

Options:

A.  

Run hdfs fs -du / and locate the DFS Remaining value

B.  

Run hdfs dfsadmin -report and locate the DFS Remaining value

C.  

Run hdfs dfs / and subtract NDFS Used from configured Capacity

D.  

Connect to http://mynamenode:50070/dfshealth.jsp and locate the DFS remaining value
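
For reference, the two working approaches here can be sketched from a shell on any cluster host (the dfsadmin report and the Hadoop 2.x NameNode web UI are standard; the grep filter is just illustrative):

$ hdfs dfsadmin -report | grep "DFS Remaining"
# or open the NameNode web UI on its default Hadoop 2.x port:
$ curl http://mynamenode:50070/dfshealth.jsp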

Question # 2

You have installed a cluster running HDFS and MapReduce version 2 (MRv2) on YARN. There are no dfs.hosts entries in your hdfs-site.xml configuration file. You configure a new worker node by setting fs.default.name in its configuration files to point to the NameNode on your cluster, and you start the DataNode daemon on that worker node. What do you have to do on the cluster to allow the worker node to join, and start storing HDFS blocks?

Options:

A.  

Without creating a dfs.hosts file or making any entries, run the command hadoop dfsadmin -refreshNodes on the NameNode

B.  

Restart the NameNode

C.  

Create a dfs.hosts file on the NameNode, add the worker node’s name to it, then issue the command hadoop dfsadmin -refreshNodes on the NameNode

D.  

Nothing; the worker node will automatically join the cluster when NameNode daemon is started
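
For context, if a dfs.hosts include file were in use, admitting a new worker would look roughly like this sketch (the file path is an assumption; dfs.hosts and -refreshNodes are the real property and command):

$ echo "newworker.example.com" >> /etc/hadoop/conf/dfs.hosts   # assumed include-file path
# hdfs-site.xml's dfs.hosts property must point at that file
$ hdfs dfsadmin -refreshNodes   # tell the NameNode to re-read the include/exclude lists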

Question # 3

Which two features does Kerberos security add to a Hadoop cluster? (Choose two)

Options:

A.  

User authentication on all remote procedure calls (RPCs)

B.  

Encryption for data during transfer between the Mappers and Reducers

C.  

Encryption for data on disk (“at rest”)

D.  

Authentication for user access to the cluster against a central server

E.  

Root access to the cluster for users hdfs and mapred but non-root access for clients
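
As background, on a Kerberized cluster the authentication step is visible from any client shell; a minimal sketch (the principal is an assumption; kinit, the HDFS commands, and the hadoop.security.authentication key are real):

$ kinit alice@EXAMPLE.COM     # obtain a ticket from the central KDC
$ hdfs dfs -ls /              # RPCs to the NameNode now carry alice's authenticated identity
$ hdfs getconf -confKey hadoop.security.authentication   # prints: kerberos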

Question # 4

Your Hadoop cluster is configured with HDFS and MapReduce version 2 (MRv2) on YARN. Can you configure a worker node to run a NodeManager daemon but not a DataNode daemon and still have a functional cluster?

Options:

A.  

Yes. The daemon will receive data from the NameNode to run Map tasks

B.  

Yes. The daemon will get data from another (non-local) DataNode to run Map tasks

C.  

Yes. The daemon will receive Map tasks only

D.  

Yes. The daemon will receive Reducer tasks only
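
For reference, a compute-only worker of this kind is just a host where only the NodeManager daemon is started; a sketch using the Hadoop 2.x daemon script (the install path is an assumption):

$ /usr/lib/hadoop/sbin/yarn-daemon.sh start nodemanager   # assumed path to the script
# no DataNode is started here, so Map tasks on this node read their input
# blocks over the network from DataNodes on other hosts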

Question # 5

Each node in your Hadoop cluster, running YARN, has 64 GB of memory and 24 cores. Your yarn-site.xml has the following configuration:

yarn.nodemanager.resource.memory-mb = 32768

yarn.nodemanager.resource.cpu-vcores = 12

You want YARN to launch no more than 16 containers per node. What should you do?

Options:

A.  

Modify yarn-site.xml with the following property:

yarn.scheduler.minimum-allocation-mb = 2048

B.  

Modify yarn-site.xml with the following property:

yarn.scheduler.minimum-allocation-mb = 4096

C.  

Modify yarn-site.xml with the following property:

yarn.nodemanager.resource.cpu-vcores

D.  

No action is needed: YARN’s dynamic resource allocation automatically optimizes the node memory and cores
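
The arithmetic behind the correct setting: with yarn.nodemanager.resource.memory-mb = 32768, limiting a node to 16 containers means each container must reserve at least 32768 / 16 = 2048 MB, which is exactly the floor that yarn.scheduler.minimum-allocation-mb (a real YARN property) sets:

$ echo $(( 32768 / 16 ))   # 2048 MB minimum per container caps the node at 16 containers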

Question # 6

You use the hadoop fs -put command to add a file “sales.txt” to HDFS. This file is small enough that it fits into a single block, which is replicated to three nodes in your cluster (with a replication factor of 3). One of the nodes holding this file (a single block) fails. How will the cluster handle the replication of this file in this situation?

Options:

A.  

The file will remain under-replicated until the administrator brings that node back online

B.  

The cluster will re-replicate the file the next time the system administrator reboots the NameNode daemon (as long as the file’s replication factor doesn’t fall below the configured minimum)

C.  

The file will be immediately re-replicated, and all other HDFS operations on the cluster will halt until the cluster’s replication values are restored

D.  

The file will be re-replicated automatically after the NameNode determines it is under-replicated, based on the block reports it receives from the DataNodes
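
For context, the block’s replica state can be watched from the shell; a sketch using the file name from the question (fsck and the standard HDFS commands are real):

$ hadoop fs -put sales.txt /sales.txt
$ hdfs fsck /sales.txt -files -blocks -locations   # lists the block and its three replica hosts
# after a DataNode dies, fsck reports the block as under-replicated until the
# NameNode schedules a new copy on a surviving node from incoming block reports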

Question # 7

Your Hadoop cluster contains nodes in three racks. You have not configured the dfs.hosts property in the NameNode’s configuration file. What results?

Options:

A.  

The NameNode will update the dfs.hosts property to include machines running the DataNode daemon on the next NameNode reboot or with the command dfsadmin -refreshNodes

B.  

No new nodes can be added to the cluster until you specify them in the dfs.hosts file

C.  

Any machine running the DataNode daemon can immediately join the cluster

D.  

Presented with a blank dfs.hosts property, the NameNode will permit DataNodes specified in mapred.hosts to join the cluster
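
A quick way to confirm whether an include list is in force (the command and key are real):

$ hdfs getconf -confKey dfs.hosts   # reports the key as missing when unset, i.e. no include list is enforced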

Question # 8

You have a cluster running with the Fair Scheduler enabled. There are currently no jobs running on the cluster, and you submit Job A, so that only Job A is running on the cluster. A while later, you submit Job B. Now Job A and Job B are running on the cluster at the same time. How will the Fair Scheduler handle these two jobs? (Choose two)

Options:

A.  

When Job B gets submitted, it will get assigned tasks, while Job A continues to run with fewer tasks.

B.  

When Job B gets submitted, Job A has to finish first, before Job B can get scheduled.

C.  

When Job A gets submitted, it doesn’t consume all the task slots.

D.  

When Job A gets submitted, it consumes all the task slots.
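
For background, the Fair Scheduler’s behavior is driven by its allocation file; a minimal sketch of one (the path and queue name are assumptions; the XML format and the yarn.scheduler.fair.allocation.file property are real):

$ cat > /etc/hadoop/conf/fair-scheduler.xml <<'EOF'
<allocations>
  <queue name="default">
    <weight>1.0</weight>  <!-- Jobs A and B share this queue, so each converges to roughly half the cluster -->
  </queue>
</allocations>
EOF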

Question # 9

Cluster Summary:

45 files and directories, 12 blocks = 57 total. Heap Size is 15.31 MB / 193.38 MB (7%)

Refer to the screenshot above.

You configure a Hadoop cluster with seven DataNodes, and one of your monitoring UIs displays the details shown in the exhibit.

What does this tell you?

Options:

A.  

The DataNode JVM on one host is not active

B.  

Because your under-replicated block count matches the Live Nodes, one node is dead, and your DFS Used % equals 0%, you can’t be certain that your cluster has all the data you’ve written to it.

C.  

Your cluster has lost all HDFS data which had blocks stored on the dead DataNode

D.  

The HDFS cluster is in safe mode
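
For reference, the same figures shown in that UI can be pulled from a shell (both commands are standard):

$ hdfs dfsadmin -report         # live/dead DataNodes, DFS Used %, under-replicated blocks
$ hdfs dfsadmin -safemode get   # confirms whether the NameNode is in safe mode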

Get CCA-500 dumps and pass your exam in 24 hours!

Free Exam Sample Questions