First Of All
When a user submits a Spark job to a YARN cluster, the YARN ResourceManager allocates a node in the cluster to run the ApplicationMaster. While starting the Driver, the ApplicationMaster invokes YarnAllocator.allocateResources() to send container requests to the YARN ResourceManager. The number of containers and the container profile, including CPU cores and memory, are specified in SparkConf.
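The request-building step can be sketched as follows. This is a simplified, hypothetical model, not Spark's actual code: `ContainerRequest` and `buildRequests` are illustrative names, and the idea is only that one resource request per desired executor is derived from the configured CPU/memory profile.

```java
import java.util.ArrayList;
import java.util.List;

public class AllocatorSketch {
    // Simplified stand-in for a YARN container request: one resource profile per request.
    record ContainerRequest(int vcores, int memoryMb) {}

    // Build one request per desired executor, using the CPU/memory profile
    // taken from configuration (e.g. spark.executor.cores / spark.executor.memory).
    static List<ContainerRequest> buildRequests(int numExecutors, int cores, int memoryMb) {
        List<ContainerRequest> requests = new ArrayList<>();
        for (int i = 0; i < numExecutors; i++) {
            requests.add(new ContainerRequest(cores, memoryMb));
        }
        return requests;
    }

    public static void main(String[] args) {
        // e.g. spark.executor.instances=3, spark.executor.cores=2, spark.executor.memory=4g
        List<ContainerRequest> reqs = buildRequests(3, 2, 4096);
        System.out.println(reqs.size());          // 3
        System.out.println(reqs.get(0).vcores()); // 2
    }
}
```

All outstanding requests are then handed to the ResourceManager, which grants containers asynchronously as cluster capacity allows.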
Allocated Containers
Once the YARN ResourceManager offers containers for YarnAllocator's requests, the matched requests are removed from the set of outstanding requests to the ResourceManager. Executors are then launched on those containers in the runAllocatedContainers function. Launching is delegated to launcherPool, a thread pool, so several executors can be started concurrently. ExecutorRunnable encapsulates the details of starting an executor: it creates an NMClient to communicate with the YARN NodeManager and then starts the executor. As is the usual way to run an application in a YARN container, the command line for the executor is prepared first, including the executor's main class, org.apache.spark.executor.CoarseGrainedExecutorBackend, the URL of the Spark Driver, and the heap settings. The YARN NodeManager then executes this command line to start the Spark executor.
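The two ingredients above, building the launch command and fanning launches out over a thread pool, can be sketched like this. The flags and the pool size are illustrative assumptions, not Spark's verbatim command line; only the main class name comes from the text.

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class LaunchSketch {
    // Assemble the executor launch command from the pieces described above:
    // heap setting, executor main class, and the Driver URL. Flag names here
    // are illustrative, not Spark's exact command line.
    static String buildCommand(String driverUrl, int heapMb) {
        return String.join(" ",
            "java", "-Xmx" + heapMb + "m",
            "org.apache.spark.executor.CoarseGrainedExecutorBackend",
            "--driver-url", driverUrl);
    }

    public static void main(String[] args) throws InterruptedException {
        // A small thread pool standing in for launcherPool: one launch task
        // per allocated container, all started concurrently.
        ExecutorService launcherPool = Executors.newFixedThreadPool(4);
        List<String> containers = List.of("container_1", "container_2", "container_3");
        for (String id : containers) {
            launcherPool.submit(() -> {
                String cmd = buildCommand("spark://driver-host:7077", 4096);
                // In the real flow, ExecutorRunnable's NMClient would ask the
                // NodeManager on the container's node to run this command;
                // here we just print it.
                System.out.println(id + " -> " + cmd);
            });
        }
        launcherPool.shutdown();
        launcherPool.awaitTermination(10, TimeUnit.SECONDS);
    }
}
```

Using a pool rather than launching serially matters when hundreds of containers are granted at once: each launch involves a round trip to a NodeManager, so the latencies overlap instead of adding up.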
Other Issues
YarnAllocator also handles completed containers. Some of them exit successfully, while others fail. That information is wrapped in RemoveExecutor messages and sent back to the Driver.
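A minimal sketch of that classification step, assuming exit status 0 means success; the message type mirrors RemoveExecutor in name only, and its fields are hypothetical.

```java
public class CompletionSketch {
    // Hypothetical message carrying a finished executor's fate back to the
    // Driver (modeled after Spark's RemoveExecutor; fields are illustrative).
    record RemoveExecutorMsg(String executorId, boolean exitedNormally, String reason) {}

    // Classify a completed container by its exit status, as the allocator
    // does when processing allocation responses from the ResourceManager.
    static RemoveExecutorMsg handleCompleted(String executorId, int exitStatus) {
        if (exitStatus == 0) {
            return new RemoveExecutorMsg(executorId, true, "executor finished normally");
        }
        return new RemoveExecutorMsg(executorId, false,
            "container exited with status " + exitStatus);
    }

    public static void main(String[] args) {
        System.out.println(handleCompleted("1", 0).exitedNormally());   // true
        System.out.println(handleCompleted("2", 137).reason());
    }
}
```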