What is Apache Spark?

Overview: Apache Spark is a high-performance, general-purpose engine used to process large-scale data. It is an open-source framework used for cluster computing. The aim of this framework is to make data analytics faster, both in terms of development and execution. In this document, I will talk about Apache Spark and discuss the various aspects of this framework.

Introduction: Apache Spark is an open-source framework for cluster computing. It is built on top of the Hadoop Distributed File System (HDFS), but it does not use the two-stage MapReduce paradigm, and it promises up to 100 times faster performance for certain applications. Spark provides the primitives for in-memory cluster computing, which lets an application load data into a cluster's memory and query it repeatedly. This in-memory computation makes Spark one of the most important components in the big data computation world.

Features: Now let us discuss the features in brief. Apache Spark comes with the following features:

  • APIs based on Java, Scala and Python.
  • Scalability to clusters of 80 to 100 nodes.
  • Ability to cache datasets in memory for interactive data analysis, e.g. extract a working set, cache it and query it repeatedly.
  • Efficient library for stream processing.
  • Efficient libraries for machine learning and graph processing.

When we talk about Spark in the context of data science, we notice that Spark can keep resident data in memory. This approach enhances performance compared to MapReduce. Viewed from the top, Spark consists of a driver program that runs the client's main method and executes various operations in parallel on a cluster.

Spark provides the resilient distributed dataset (RDD), a collection of elements distributed across the different nodes of a cluster so that they can be operated on in parallel. Spark can persist an RDD in memory, allowing it to be reused efficiently across parallel operations, and RDDs automatically recover in case of node failure.
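To make the in-memory reuse concrete, here is a minimal Java sketch (the file name logs.txt and the local[2] master are assumptions for illustration, not values from the article): an RDD is loaded once, cached, and then queried twice without re-reading the file.

import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

public class CacheExample {
    public static void main(String[] args) {
        JavaSparkContext sc = new JavaSparkContext("local[2]", "Cache example");

        // Load a (hypothetical) log file into an RDD and keep it in memory.
        JavaRDD<String> logs = sc.textFile("logs.txt").cache();

        // Both actions below reuse the cached data instead of re-reading the file.
        long total = logs.count();
        long errors = logs.filter(line -> line.contains("ERROR")).count();

        System.out.println(total + " lines, " + errors + " errors");
        sc.stop();
    }
}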

Spark also provides shared variables which are used in parallel operations. When Spark runs a set of tasks in parallel on different nodes, it ships a copy of each variable used in the function to every task. Sometimes, however, a variable needs to be shared across tasks or between tasks and the driver. In Spark we have two types of shared variables (a short Java sketch follows the list below) –

  • broadcast variables – used to cache a value in memory on all nodes.
  • accumulators – variables that are only added to, such as counters and sums.
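The following is a rough Java sketch of both shared variable types, assuming the classic JavaSparkContext broadcast and accumulator API; the lookup array and the numbers being summed are made up for illustration.

import java.util.Arrays;
import org.apache.spark.Accumulator;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.broadcast.Broadcast;

public class SharedVariablesExample {
    public static void main(String[] args) {
        JavaSparkContext sc = new JavaSparkContext("local[2]", "Shared variables");

        // Broadcast variable: cached read-only on every node instead of being
        // shipped with each task.
        Broadcast<int[]> lookup = sc.broadcast(new int[] {1, 2, 3});

        // Accumulator: tasks can only add to it; the driver reads the final value.
        Accumulator<Integer> counter = sc.accumulator(0);

        sc.parallelize(Arrays.asList(1, 2, 3, 4)).foreach(x -> counter.add(x));

        System.out.println(Arrays.toString(lookup.value())); // [1, 2, 3]
        System.out.println(counter.value());                 // 10
        sc.stop();
    }
}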

Configuring Spark:

Spark provides three main areas for configuration:

  • Spark properties – These control most of the application settings and can be set either by using the SparkConf object or with the help of Java system properties.
  • Environment Variables – These can be used to configure machine-based settings, e.g. the IP address, with the help of the conf/spark-env.sh script on every node.
  • Logging – This can be configured using the standard log4j properties.

Spark Properties: Spark properties control most of the application settings and should be configured separately for each application. These properties can be set using a SparkConf object, which is passed to the SparkContext. SparkConf allows us to configure most of the common properties needed to initialize an application, and its set() method lets us set arbitrary key-value pairs. A sample using the set() method is shown below –

Listing 1: Sample showing the set() method

val conf = new SparkConf()
  .setMaster("local[2]")  // e.g. "local[2]" for local mode, or a spark://host:port cluster URL
  .setAppName("My Sample SPARK application")
  .set("spark.executor.memory", "1g")
val sc = new SparkContext(conf)

Some of the common properties are listed below, followed by a short example of setting them –
• spark.executor.memory – The amount of memory to be used per executor process.
• spark.serializer – The class used to serialize objects that will be sent over the network. Since the default Java serialization is quite slow, it is recommended to use the org.apache.spark.serializer.KryoSerializer class for better performance.
• spark.kryo.registrator – The class used to register custom classes if we use Kryo serialization.
• spark.local.dir – The directories which Spark uses as scratch space to store the map output files.
• spark.cores.max – Used in standalone mode to specify the maximum number of CPU cores to request.
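As an illustrative Java sketch of setting these properties programmatically (the memory and core values, the scratch directory and the commented-out registrator class are assumptions, not values from the article):

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

public class PropertiesExample {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf()
                .setAppName("Properties example")
                .setMaster("local[2]")
                // Use Kryo instead of the slower default Java serialization.
                .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
                // Only if you provide a custom KryoRegistrator class (hypothetical name):
                // .set("spark.kryo.registrator", "com.example.MyCustomRegistrator")
                .set("spark.executor.memory", "2g")
                .set("spark.local.dir", "/tmp/spark-scratch")
                .set("spark.cores.max", "4");

        JavaSparkContext sc = new JavaSparkContext(conf);
        // ... define and run RDD operations here ...
        sc.stop();
    }
}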

Environment Variables: Some Spark settings can be configured using environment variables, which are defined in the conf/spark-env.sh script file. These are machine-specific settings, e.g. the library search path, the Java path, etc. Some of the commonly used environment variables are –

  • JAVA_HOME – The location where Java is installed on your system.
  • PYSPARK_PYTHON – The Python binary to use for PySpark.
  • SPARK_LOCAL_IP – The IP address of the machine to which Spark should bind.
  • SPARK_CLASSPATH – Used to add libraries which are needed at runtime.
  • SPARK_JAVA_OPTS – Used to add JVM options.

Logging: Spark uses the standard Log4j API for logging, which can be configured using the log4j.properties file.

Initializing Spark:

To start with a Spark program, the first thing is to create a JavaSparkContext object, which tells Spark how to access the cluster. To create a Spark context, we first create a SparkConf object as shown below:

Listing 2: Initializing the spark context object

SparkConf config = new SparkConf().setAppName(applicationName).setMaster(master);
JavaSparkContext context = new JavaSparkContext(config);

The parameter applicationName is the name of our application, which is shown on the cluster UI. The parameter master is the cluster URL, or a special "local" string used to run in local mode.

Resilient Distributed Datasets (RDDs):

Spark is based on the concept of the resilient distributed dataset, or RDD. An RDD is a fault-tolerant collection of elements which can be operated on in parallel. An RDD can be created in either of the following two ways (both are sketched in the example after this list):

  • By parallelizing an existing collection – Parallelized collections are created by calling the parallelize method of the JavaSparkContext class in the driver program. The elements of the existing collection are copied into a distributed dataset that can be operated on in parallel.
  • By referencing a dataset on an external storage system – Spark can create distributed datasets from any Hadoop-supported storage source, e.g. HDFS, Cassandra, HBase, etc.
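Here is a brief Java sketch of both creation styles (the sample numbers and the file name data.txt are hypothetical; on a real cluster the path would typically be an hdfs:// URL):

import java.util.Arrays;
import java.util.List;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

public class RddCreationExample {
    public static void main(String[] args) {
        JavaSparkContext sc = new JavaSparkContext("local[2]", "RDD creation");

        // 1. Parallelize an existing in-memory collection.
        List<Integer> data = Arrays.asList(1, 2, 3, 4, 5);
        JavaRDD<Integer> parallelized = sc.parallelize(data);

        // 2. Reference a dataset on external storage (a local file here;
        //    an hdfs:// path works the same way on a real cluster).
        JavaRDD<String> fromFile = sc.textFile("data.txt");

        System.out.println(parallelized.count());
        System.out.println(fromFile.count());
        sc.stop();
    }
}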

RDD Operations:

RDD supports two types of operations –

  • Transformations – Used to create a new dataset from an existing one.
  • Actions – These return a value to the driver program after running a computation on the dataset.

In RDD the transformations are lazy: they do not compute their results right away. Instead, they just remember the transformations applied to the base dataset, and the results are only computed when an action requires a value to be returned to the driver program.
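To make the lazy evaluation concrete, here is a small Java sketch (the input values are made up): the map and filter transformations only build up a lineage, and nothing is executed until the reduce action is called.

import java.util.Arrays;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

public class LazyEvaluationExample {
    public static void main(String[] args) {
        JavaSparkContext sc = new JavaSparkContext("local[2]", "Lazy evaluation");

        JavaRDD<Integer> numbers = sc.parallelize(Arrays.asList(1, 2, 3, 4, 5));

        // Transformations: only recorded, not executed yet.
        JavaRDD<Integer> squares = numbers.map(x -> x * x);
        JavaRDD<Integer> evens = squares.filter(x -> x % 2 == 0);

        // Action: triggers the actual computation and returns a value to the driver.
        int sum = evens.reduce((a, b) -> a + b);

        System.out.println(sum); // 4 + 16 = 20
        sc.stop();
    }
}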

Summary: In the discussion above I have explained different aspects of the Apache Spark framework and its implementation. The performance advantage of Spark over a normal MapReduce job is also one of the most important aspects we should understand clearly.

Let us conclude our discussion in the following bullets:

  • Spark is a framework presented by Apache which delivers a high-performance general engine for processing large-scale data.
  • It is developed on top of HDFS, but it does not use the two-stage MapReduce paradigm.
  • Spark promises up to 100 times faster performance than Hadoop MapReduce for certain applications.
  • Spark performs best on clusters.
  • Spark can scale up to a range of 80 to 100 nodes.
  • Spark has the ability to cache datasets in memory.
  • Spark can be configured with the help of a properties file and some environment variables.
  • Spark is based on resilient distributed datasets (RDDs), which are fault-tolerant collections of elements that can be operated on in parallel.