HCE 3700 Hitachi Vantara Certified Expert: Performance Architect Certification Exam Guide
**Question 1.** Which component in a Hitachi VSP 5000 series is responsible for managing
host I/O commands before they reach the cache?
A) Front‑end Director (FED)
B) Back‑end Director (BED)
C) Virtual Storage Director (VSD)
D) Cache Logical Partition (CLPR)
Answer: A
Explanation: The FED receives host commands, performs protocol translation, and forwards
them to the cache subsystem.
**Question 2.** In a RAID 1+0 configuration, what is the write penalty for a single block write
operation?
A) 1
B) 2
C) 3
D) 4
Answer: B
Explanation: RAID 1+0 mirrors data (write to two disks) and stripes across pairs, resulting in two
write operations per logical write.
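The penalty arithmetic can be sketched as follows. This is a minimal illustration using the standard textbook write-penalty values; the helper function and level names are hypothetical, not a Hitachi API.

```python
# Standard back-end write penalties per RAID level (illustrative values,
# not Hitachi-specific): mirroring doubles writes, parity schemes add
# read-modify-write overhead.
WRITE_PENALTY = {"RAID 0": 1, "RAID 1+0": 2, "RAID 5": 4, "RAID 6": 6}

def backend_writes(logical_writes: int, raid_level: str) -> int:
    """Physical write operations generated by the given logical writes."""
    return logical_writes * WRITE_PENALTY[raid_level]

print(backend_writes(100, "RAID 1+0"))  # 200: each write lands on both mirrors
print(backend_writes(100, "RAID 6"))    # 600: data plus two parity updates
```

The same table explains why parity RAID levels are costlier for write-heavy workloads: RAID 6's dual parity triples the penalty of RAID 1+0.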
**Question 3.** Which RAID level provides the highest read throughput for large sequential
workloads while tolerating two simultaneous disk failures?
A) RAID 5
B) RAID 6
C) RAID 1+0
D) RAID 10+1
Answer: B
Explanation: RAID 6 adds two parity blocks, allowing two disk failures, and its striping gives high
sequential read performance.
**Question 4.** A Cache Logical Partition (CLPR) is primarily used to:
A) Increase total cache size by aggregating multiple cache modules.
B) Isolate workloads to guarantee QoS.
C) Store metadata for thin provisioning.
D) Accelerate backup traffic only.
Answer: B
Explanation: CLPRs allow administrators to allocate dedicated cache resources to specific
workloads, ensuring performance isolation.
**Question 5.** When cache‑write‑pending (CWP) reaches 80 % of its limit, what is the most
likely impact on host response time?
A) Response time improves due to write coalescing.
B) No impact; CWP only affects cache capacity.
C) Response time degrades because writes are forced to media.
D) Cache is automatically expanded.
Answer: C
Explanation: High CWP forces pending writes to be flushed to storage media, increasing latency
for subsequent host I/O.
**Question 6.** Which storage media type typically offers the lowest latency for random 4 KB
reads?
A) 10 K RPM SAS HDD
B) 15 K RPM SAS HDD
C) SATA SSD
D) NVMe SSD
Answer: D
Explanation: NVMe SSDs communicate over PCIe, delivering sub‑100 µs latency, far lower than
SAS HDDs or SATA SSDs.
**Question 7.** In a Fibre Channel fabric, what does “buffer‑to‑buffer credit” (BB Credit)
control?
A) Number of concurrent LUNs per port.
B) Amount of data that can be outstanding without acknowledgment.
C) Maximum queue depth on the host.
D) Number of active FC switches.
Answer: B
Explanation: BB Credits represent the number of frames a sender can transmit before receiving
acknowledgment, preventing buffer overflow.
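The credit mechanism can be sketched as a counter: each frame consumes a credit, each R_RDY from the receiver returns one, and a sender with zero credits must pause. The class below is a simplified model, not the FC-FS state machine.

```python
# Toy model of buffer-to-buffer credit flow control (simplified sketch;
# credits gate frames in flight, and R_RDY primitives return them).
class FcPort:
    def __init__(self, bb_credits: int):
        self.credits = bb_credits

    def can_send(self) -> bool:
        return self.credits > 0

    def send_frame(self) -> None:
        if not self.can_send():
            raise RuntimeError("no BB credits: sender must wait")
        self.credits -= 1      # one credit consumed per frame sent

    def receive_r_rdy(self) -> None:
        self.credits += 1      # receiver freed a buffer

port = FcPort(bb_credits=2)
port.send_frame()
port.send_frame()
print(port.can_send())   # False: link pauses until an R_RDY arrives
port.receive_r_rdy()
print(port.can_send())   # True: one credit restored
```

This is also why long-distance FC links need more BB credits: frames in flight hold credits for the full round-trip time.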
**Question 8.** Over‑subscribing a FC fabric beyond its designed bandwidth primarily leads to:
A) Increased cache hit ratio.
B) Higher I/O latency due to queuing.
C) Reduced RAID rebuild time.
D) Automatic load‑balancing by GLM.
Answer: B
Explanation: Over‑subscription creates contention, causing I/O queues to build up and latency
to rise.
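The latency growth under contention can be estimated with a basic M/M/1 queuing model. This assumes Poisson arrivals, which real fabrics violate (traffic is burstier), so treat it as a directional sketch rather than a sizing tool.

```python
# Illustrative M/M/1 estimate of how link utilization inflates response
# time (assumption: Poisson arrivals; real fabric traffic is burstier).
def response_time_ms(service_ms: float, utilization: float) -> float:
    """Mean response time grows as service_time / (1 - utilization)."""
    if utilization >= 1.0:
        raise ValueError("saturated link: queue grows without bound")
    return service_ms / (1.0 - utilization)

print(response_time_ms(0.5, 0.50))  # 1.0 ms at 50 % utilization
# At 90 % utilization the same 0.5 ms of service costs roughly 5 ms.
print(response_time_ms(0.5, 0.90))
```

The nonlinearity is the key point: doubling utilization from 45 % to 90 % multiplies queuing delay several times over, which is why oversubscribed fabrics degrade abruptly.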
**Question 9.** Which iSCSI parameter directly influences the number of concurrent I/O
operations a host can issue?
A) Maximum Transmission Unit (MTU)
B) Initiator/Target Queue Depth (IQD/TQD)
C) Authentication method
D) VLAN ID
Answer: B
Explanation: IQD/TQD define the number of outstanding SCSI commands, controlling
concurrency.
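The relationship between queue depth, latency, and throughput follows Little's law, a general queueing result rather than an iSCSI-specific formula:

```python
# Little's law: outstanding commands = IOPS * latency, so queue depth
# bounds the throughput a host can achieve at a given latency.
def max_iops(queue_depth: int, latency_ms: float) -> float:
    """Upper bound on IOPS for a given queue depth and per-I/O latency."""
    return queue_depth / (latency_ms / 1000.0)

print(max_iops(32, 1.0))   # 32000.0: 32 outstanding ops at 1 ms each
print(max_iops(1, 1.0))    # 1000.0: serial I/O caps out quickly
```

This is why raising queue depth helps until the device saturates, after which latency rises and the bound stops improving.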
**Question 10.** The Global Link Manager (GLM) in a VSP environment is used to:
A) Encrypt data at rest.
B) Balance I/O across multiple physical paths.
C) Manage thin provisioning policies.
D) Perform RAID level migration.
Answer: B
Explanation: GLM monitors path performance and redistributes I/O to avoid hot links and
improve throughput.
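A minimal round-robin path selector illustrates the simplest form of this balancing. GLM's actual policy also weighs measured per-path load and errors; the function and path names below are illustrative assumptions.

```python
# Minimal round-robin multipath sketch (illustrative only; real path
# managers also weigh measured per-path load and error counts).
import itertools

def make_selector(paths: list) :
    """Return a callable that hands out paths in round-robin order."""
    return itertools.cycle(paths).__next__

pick = make_selector(["CL1-A", "CL2-A", "CL3-A"])
print([pick() for _ in range(5)])
# ['CL1-A', 'CL2-A', 'CL3-A', 'CL1-A', 'CL2-A']
```

Round robin evens out I/O counts but not I/O sizes or path speeds, which is why adaptive, load-aware balancing performs better on asymmetric fabrics.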
**Question 11.** When configuring LUNs, what is a “hot port” scenario?
A) A port that exceeds 80 % utilization, causing latency spikes.
B) A port that is physically hotter than others due to hardware failure.
C) A port with the highest number of CLPRs attached.
D) A port configured with the maximum queue depth.
Answer: A
Explanation: A “hot port” is a port that becomes a bottleneck because too many LUNs, and therefore too much I/O traffic, are mapped to it.
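A hot-port check reduces to scanning utilization against the threshold. The port names, the utilization source, and the 80 % cutoff below are illustrative assumptions, not a Hitachi CLI or API.

```python
# Hypothetical monitoring check for hot ports (port names and the 80 %
# threshold are illustrative, not Hitachi-specific).
PORT_UTILIZATION = {"CL1-A": 0.92, "CL1-B": 0.40, "CL2-A": 0.85}

def hot_ports(utilization: dict, threshold: float = 0.80) -> list:
    """Return ports above the threshold, sorted; rebalancing candidates."""
    return sorted(port for port, u in utilization.items() if u > threshold)

print(hot_ports(PORT_UTILIZATION))  # ['CL1-A', 'CL2-A']
```

Remediation typically means remapping some LUNs from the flagged ports to underutilized ones, such as CL1-B above.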
**Question 12.** In workload profiling, a “small‑block random” I/O pattern typically stresses
which subsystem the most?