TechAlpine – The Technology world

Oracle Drops Collection Literals in JDK 8

Posted on April 12, 2014

In a posting on OpenJDK JEP 186, Oracle’s Brian Goetz announced that Oracle will not be pursuing collection literals as a language feature in JDK 8.

A collection literal is a syntactic expression form that evaluates to an aggregate type such as an array, List, or Map. Project Coin proposed collection literals, which would also have complemented the library additions in Java SE 8. The assumption was that collection literals would increase productivity, code readability, and code safety.
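To see the motivation, compare what small collections cost in pre-literal Java. The sketch below shows the status quo the proposal aimed to improve; the bracketed literal syntax in the trailing comment follows the style sketched in the Project Coin discussion and was never valid Java:

import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class StatusQuo {
    public static void main(String[] args) {
        // Without literals, even small collections take several statements:
        List<String> list = new ArrayList<>(Arrays.asList("a", "b", "c"));

        Map<String, Integer> map = new HashMap<>();
        map.put("a", 1);
        map.put("b", 2);

        System.out.println(list + " " + map);
        // A collection literal would have collapsed each of these into a
        // single expression, e.g. ["a", "b", "c"] or {"a": 1, "b": 2}
        // (syntax as sketched in the Project Coin discussion).
    }
}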

As an alternative, Oracle suggests a library-based proposal built on the concept of static methods on interfaces. The implementation would ideally be via new, dedicated immutable classes.

Following are the major points behind this library-based approach.

  • The basic form of this feature works only for Sets, Lists, and Maps, so it is not very satisfying or popular. An advanced form covering an extensible set of other collection types would be open-ended, messy, and virtually guaranteed to way overrun its design budget.
  • The library-based changes would remove much of the need for the “collection literals” change discussed in Project Coin.
  • The library-based approach gives X% of the benefit for 1% of the cost, where X >> 1.
  • Value types are coming, and how collection literals would behave with value types is not yet known. It is better not to attempt collection literals before value types arrive.
  • Oracle’s language-design bandwidth is better spent on the foundational issues underlying a library-based version: more efficient varargs, array constants in the constant pool, immutable arrays, and support for caching (and reclaiming under memory pressure) intermediate immutable results.

According to Oracle’s Brian Goetz, the real pain is in Maps, not Lists, Sets, or arrays. Library-based solutions are reasonable for Lists, Sets, and arrays, but the approach still lacks a reasonable way to describe pair literals for Maps. Static methods in an interface make the library-based solution more practical, and value types would make library-based solutions for Map far more practical as well. A proof-of-concept patch for the library-based solution is also available. A sketch of what such interface-hosted factory methods could look like follows.
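Below is a minimal, hypothetical sketch of the idea using JDK 8’s static interface methods. The Lists and Maps interfaces here illustrate the shape of the proposal; they are not an actual JDK API:

import java.util.AbstractMap;
import java.util.Arrays;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical helper interfaces in the spirit of the library-based proposal.
interface Lists {
    @SafeVarargs
    static <E> List<E> of(E... elements) {
        // Wrap a defensive copy so the returned list is effectively immutable.
        return Collections.unmodifiableList(Arrays.asList(elements.clone()));
    }
}

interface Maps {
    static <K, V> Map.Entry<K, V> entry(K key, V value) {
        return new AbstractMap.SimpleImmutableEntry<>(key, value);
    }

    @SafeVarargs
    static <K, V> Map<K, V> of(Map.Entry<K, V>... entries) {
        Map<K, V> m = new HashMap<>();
        for (Map.Entry<K, V> e : entries) {
            m.put(e.getKey(), e.getValue());
        }
        return Collections.unmodifiableMap(m);
    }
}

class Demo {
    public static void main(String[] args) {
        List<String> names = Lists.of("alice", "bob");
        Map<String, Integer> ages = Maps.of(Maps.entry("alice", 30), Maps.entry("bob", 25));
        System.out.println(names + " " + ages);
    }
}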

What are the Hadoop MapReduce concepts?

Posted on April 9, 2014

What do you mean by Map-Reduce programming?

MapReduce is a programming model designed for processing large volumes of data in parallel by dividing the work into a set of independent tasks.

The MapReduce programming model is inspired by functional languages and targets data-intensive computations. The input data format is application-specific, and is specified by the user. The output is a set of <key,value> pairs. The user expresses an algorithm using two functions, Map and Reduce. The Map function is applied on the input data and produces a list of intermediate <key,value> pairs. The Reduce function is applied to all intermediate pairs with the same key. It typically performs some kind of merging operation and produces zero or more output pairs. Finally, the output pairs are sorted by their key value. In the simplest form of MapReduce programs, the programmer provides just the Map function. All other functionality, including the grouping of the intermediate pairs which have the same key and the final sorting, is provided by the runtime.
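In code terms, the model reduces to two function shapes. The sketch below is conceptual only; the interface and type names are ours, not part of any Hadoop API:

import java.util.Map;

// Conceptual shapes of the two user-supplied functions in the MapReduce model.
// (K1, V1) are input pairs, (K2, V2) intermediate pairs, (K3, V3) output pairs.
interface MapFunction<K1, V1, K2, V2> {
    // Applied to each input record; emits zero or more intermediate pairs.
    Iterable<Map.Entry<K2, V2>> map(K1 key, V1 value);
}

interface ReduceFunction<K2, V2, K3, V3> {
    // Applied once per distinct intermediate key, with all values for that key.
    Iterable<Map.Entry<K3, V3>> reduce(K2 key, Iterable<V2> values);
}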

Phases of the MapReduce model

The top level unit of work in MapReduce is a job. A job usually has a map and a reduce phase, though the reduce phase can be omitted. For example, consider a MapReduce job that counts the number of times each word is used across a set of documents. The map phase counts the words in each document, then the reduce phase aggregates the per-document data into word counts spanning the entire collection.

During the map phase, the input data is divided into input splits for analysis by map tasks running in parallel across the Hadoop cluster. By default, the MapReduce framework gets input data from the Hadoop Distributed File System (HDFS).

The reduce phase uses results from map tasks as input to a set of parallel reduce tasks. The reduce tasks consolidate the data into final results. By default, the MapReduce framework stores results in HDFS.

Although the reduce phase depends on output from the map phase, map and reduce processing is not necessarily sequential. That is, reduce tasks can begin as soon as any map task completes. It is not necessary for all map tasks to complete before any reduce task can begin.

MapReduce operates on key-value pairs. Conceptually, a MapReduce job takes a set of input key-value pairs and produces a set of output key-value pairs by passing the data through map and reduce functions. The map tasks produce an intermediate set of key-value pairs that the reduce tasks use as input.

The keys in the map output pairs need not be unique. Between the map processing and the reduce processing, a shuffle step sorts all map output values with the same key into a single reduce input (key, value-list) pair, where the ‘value’ is a list of all values sharing the same key. Thus, the input to a reduce task is actually a set of (key, value-list) pairs.

Though each set of key-value pairs is homogeneous, the key-value pairs in each step need not have the same type. For example, the key-value pairs in the input set (KV1) can be (string, string) pairs, with the map phase producing (string, integer) pairs as intermediate results (KV2), and the reduce phase producing (integer, string) pairs for the final results (KV3).

Example demonstrating MapReduce concepts

The example demonstrates the basic MapReduce concepts by calculating the number of occurrences of each word in a set of text files.

The MapReduce input data is divided into input splits, and the splits are further divided into input key-value pairs. In this example, the input data set is two documents, document1 and document2. The InputFormat subclass divides the data set into one split per document, for a total of two splits.

Note: The MapReduce framework divides the input data set into chunks called splits using the org.apache.hadoop.mapreduce.InputFormat subclass supplied in the job configuration. Splits are created by the local Job Client and included in the job information made available to the Job Tracker. The Job Tracker creates a map task for each split. Each map task uses a RecordReader provided by the InputFormat subclass to transform the split into input key-value pairs.

A (line number, text) key-value pair is generated for each line in an input document. The map function discards the line number and produces a per-line (word, count) pair for each word in the input line. The reduce phase produces (word, count) pairs representing aggregated word counts across all the input documents. Given that input data, the example job progresses from per-line (word, count) pairs out of the map phase to collection-wide (word, count) totals out of the reduce phase.

The output from the map phase contains multiple key-value pairs with the same key: the ‘oats’ and ‘eat’ keys appear twice. Recall that the MapReduce framework consolidates all values with the same key before entering the reduce phase, so the input to reduce is actually (key, value-list) pairs. The reduce phase then collapses each value list into a single aggregated count per word. A complete word-count job along these lines is sketched below.
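The following is a minimal word-count job in Hadoop’s Java mapreduce API, patterned on the standard Hadoop example; the class names and the two command-line input/output paths are our choices:

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

    // Map: (offset, line of text) -> (word, 1) for every word on the line.
    public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        protected void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, ONE);
            }
        }
    }

    // Reduce: (word, [1, 1, ...]) -> (word, total count).
    public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        private final IntWritable result = new IntWritable();

        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = new Job(new Configuration(), "word count"); // Job.getInstance(...) in newer APIs
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));   // input documents
        FileOutputFormat.setOutputPath(job, new Path(args[1])); // results directory
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}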

MapReduce Job Life Cycle

Following is the life cycle of a typical MapReduce job and the roles of the primary actors. The full life cycle is more complex, so here we will concentrate on the primary components.

The Hadoop configuration can be done in different ways but the basic configuration consists of the following.

  • A single master node running the Job Tracker
  • Multiple worker nodes, each running a Task Tracker

Following are the life cycle components of MapReduce job.

  • Local Job Client: The local Job Client prepares the job for submission and hands it off to the Job Tracker.
  • Job Tracker: The Job Tracker schedules the job and distributes the map work among the Task Trackers for parallel processing.
  • Task Tracker: Each Task Tracker spawns a Map Task. The Job Tracker receives progress information from the Task Trackers.

Once map results are available, the Job Tracker distributes the reduce work among the Task Trackers for parallel processing.

Each Task Tracker spawns a Reduce Task to perform the work. The Job Tracker receives progress information from the Task Trackers.

Not all map tasks have to complete before reduce tasks begin running; reduce tasks can begin as soon as map tasks start completing. Thus, the map and reduce steps often overlap.

Functionality of different components in MapReduce job

Job Client: The Job Client performs the following tasks:

  • Validates the job configuration
  • Generates the input splits. This is basically splitting the input data into chunks
  • Copies the job resources (configuration, job JAR file, input splits) to a shared location, such as an HDFS directory, where it is accessible to the Job Tracker and Task Trackers
  • Submits the job to the Job Tracker

Job Tracker: The Job Tracker performs the following tasks:

  • Fetches input splits from the shared location where the Job Client placed the information
  • Creates a map task for each split
  • Assigns each map task to a Task Tracker (worker node)

After the map tasks are complete, the Job Tracker does the following:

  • Creates reduce tasks up to the maximum enabled by the job configuration.
  • Assigns each map result partition to a reduce task.
  • Assigns each reduce task to a Task Tracker.

Task Tracker: A Task Tracker manages the tasks of one worker node and reports status to the Job Tracker.

The Task Tracker does the following when a map or reduce task is assigned to it:

  • Fetches job resources locally
  • Spawns a child JVM on the worker node to execute the map or reduce task
  • Reports status to the Job Tracker

Debugging MapReduce

Hadoop keeps logs of important events during program execution. By default, these are stored in the logs/ subdirectory of the hadoop-version/ directory where you run Hadoop from. Log files are named hadoop-username-service-hostname.log. The most recent data is in the .log file; older logs have their date appended to them. The username in the log filename refers to the username under which Hadoop was started; this is not necessarily the same username you are using to run programs. The service name refers to which of the several Hadoop programs is writing the log; these can be jobtracker, namenode, datanode, secondarynamenode, or tasktracker. All of these are important for debugging a whole Hadoop installation. But for individual programs, the tasktracker logs will be the most relevant: any exceptions thrown by your program will be recorded there.

The log directory will also have a subdirectory called userlogs. Here there is another subdirectory for every task run. Each task records its stdout and stderr to two files in this directory. Note that on a multi-node Hadoop cluster, these logs are not centrally aggregated — you should check each TaskNode’s logs/userlogs/ directory for their output.

Debugging in the distributed setting is complicated and requires logging into several machines to access log data. If possible, programs should be unit tested by running Hadoop locally. The default configuration deployed by Hadoop runs in “single instance” mode, where the entire MapReduce program runs in the same Java instance that called JobClient.runJob(). Using a debugger like Eclipse, you can then set breakpoints inside the map() or reduce() methods to find your bugs. A configuration sketch for such local runs follows.
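As an illustration, a configuration like the following forces single-JVM, local-filesystem execution. The property names assume Hadoop 1.x-era settings, so check your version’s documentation for the exact keys:

import org.apache.hadoop.conf.Configuration;

public class LocalDebugConfig {
    // Builds a configuration that runs MapReduce inside the submitting JVM,
    // so IDE breakpoints in map() and reduce() are hit directly.
    public static Configuration create() {
        Configuration conf = new Configuration();
        conf.set("mapred.job.tracker", "local"); // run the job in-process
        conf.set("fs.default.name", "file:///"); // read and write the local filesystem
        return conf;
    }
}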

Is the reduce phase mandatory?

Some jobs can complete all their work during the map phase, so the job can be a map-only job. To stop a job after the map phase completes, set the number of reduce tasks to zero, as in the sketch below.
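A minimal sketch with the mapreduce Job API; the job name and bare configuration here are placeholders:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class MapOnlyJob {
    public static Job configure() throws Exception {
        Job job = new Job(new Configuration(), "map-only example");
        // Zero reduce tasks makes this a map-only job: map output is written
        // directly to the output path, and the shuffle and reduce phases are skipped.
        job.setNumReduceTasks(0);
        return job;
    }
}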

Conclusion

This module described the MapReduce execution platform at the heart of the Hadoop system. Using MapReduce, applications can achieve a high degree of parallelism. The MapReduce framework also provides a high degree of fault tolerance for applications running on it by limiting the communication that can occur between nodes.

How to Create 2D Animation Using Flash?

Posted on January 11, 2014

Overview: In this article I will describe one of the main features of Flash, known as 2D animation. In this session I will cover how to create a new, clean movie file, and the tools and steps involved in creating your first simple animation using motion tweening.

Introduction: 2D animation figures are created and/or edited on the computer using 2D bitmap graphics, or created and edited using 2D vector graphics. This includes automated, computerized versions of traditional animation techniques such as interpolated morphing, onion skinning, and interpolated rotoscoping. 2D animation has many applications, including analog computer animation, Flash animation, and PowerPoint animation. Cinemagraphs are still photographs in the form of an animated GIF file, part of which is animated.

Image1: Showing editor

Opening a New File in Flash:

To begin, I will open Flash by clicking its icon in the Start menu under the folder where I installed it. The first time the application loads, it will ask me to choose either Designer or Developer view, and a resolution that matches my monitor settings (this isn’t applicable to versions of Flash prior to MX). For our purposes in this lesson I’ve selected Designer view, and my screen is set to a resolution of 1024×768. (I can always change this later by clicking the “Window” option in the top toolbar and then selecting “Panel Sets”.) Once I have done so, I’ll see an empty canvas surrounded by tool panels, similar to the window above.

Adjusting Document Settings:

Image2: Showing document settings

The canvas starts at a default size; because I prefer to work at standard sizes and aspect ratios for web production, I am going to change my canvas size from the default of 550 pixels wide × 400 pixels high to a smaller size of 320 × 240. I’ll do this by clicking the button on the Properties tab (the panel just under the canvas and working area) that displays the canvas size; this opens a pop-up window containing the document’s properties. For right now you only need to be concerned with three things: the document size, the background colour, and the frame rate.

I am going to adjust the size to 320 × 240 by manually entering the values in the spaces provided, use the colour picker to choose a clean white background, and change the frame rate from the default of 24 frames per second (fps) to 12. Frame rates for television and other media production can go as high as 30 fps, but 12 is all I really need for Flash animation on the web; it still allows a smooth flow of movement while avoiding the larger file size that extra frames would add.

Opening the Main Toolbar and Drawing a Shape:

Image3: Showing tool bars

On the right side of the window I will see the Tools panel, with icons giving me access to the main functions I’ll need to start drawing and animating in Flash CS6. To start with, I can click the icon that looks like a hand, in the right-hand column about halfway down under the “View” heading. This is the (appropriately and humorously named) Hand Tool; if I click and drag my mouse in the work area with this tool selected, I can drag my canvas around until it’s positioned where I like.

Once I have my workspace set up comfortably, let’s try drawing something. Beneath the top section of the Tools panel, click the button depicting a pale circle outlined in black; this is the Oval Tool, which I can use to draw any circular shape. Under the rest of the tools are two colour-picker windows: one for the “Stroke” (represented by a pencil icon) and one for the “Fill” (represented by a paint-bucket icon). The stroke is the colour of the shape’s outline, while the fill is the colour inside the shape. I’m going to set my stroke to black and my fill to a dark red, and then draw a small circle in the upper left corner of my canvas by clicking and dragging until the shape is the desired size, then releasing.

Viewing the Timeline:

Image4: Showing time lines

Now, before we jump right into animating, let’s take a look at one more area of the window: the timeline, located above the work area (depicted in the image above). The timeline is divided into two columns: the Layers column, and the timeline itself, separated into individual frames; the descending red line marks your current location in the frameset. The timeline is one of the most important tools in Flash; it lets me keep track of my various objects and shapes and which layer they’re on, as well as keeping track of my animation keyframes and their timing and placement. Most of the work on my animations will be done here.

As you can see, right now we have one layer (containing the circle drawn in the earlier step) with one timeline associated with that layer. The first frame of the timeline is grayed out, with a small black dot marking it; this means that the frame is a keyframe, created automatically when I drew the circle on that frame. In order to animate by tweening, I have to define keyframes; without them, Flash has no beginning or end points to animate between.

Converting Your Shape to a Symbol:


Image5: Showing conversion

Before we go any further, we need to make one change to our circle. Why do we need to convert the shape we’ve just drawn into something else? Because Flash doesn’t tween raw shapes; if you try to apply a tween while the objects are still shapes, it won’t work. So we convert them to symbols; it won’t take more than a moment.

Just right-click on the circle and select “Convert to Symbol”. A popup window appears, asking what I want to name the symbol and what behaviour to apply to it; type in a name and choose “Graphic” (the other options will be covered later, in more advanced lessons). Click “OK”; you’ll see the change in your object by the blue outline surrounding it. The symbol will now appear in the Library, which I can view by pressing F11 or clicking Window > Library; once symbols or objects are listed in the Library, they can be reused at any point simply by dragging them onto the canvas.

Now, let’s get back to animating. Next, we’ll create another keyframe so that we can tell Flash to animate in between the two.

Creating a New Keyframe:

Image6: Showing key frame

Let’s look at the timeline again. As you’ll recall, we set the frame rate for this document to 12 frames per second; that means that twelve of the blocks in the timeline make up one second of animation. I want my circle to move over one second, so I’m going to click on the twelfth block; the timeline is marked in increments of five, so just count two past the ten marker. If I right-click on the frame, I’ll see two keyframe options: Insert Keyframe and Insert Blank Keyframe. I want to click Insert Keyframe; this automatically copies everything on the previous keyframe (including our circle) to the new keyframe.

After clicking “Insert Keyframe”, you can see that the 12th frame is now grayed out and marked with a dot, just like the first; the gap in between them is also grayed, with the 11th frame marked by a small white rectangle. This signifies that there is no movement or “tween” placed on the frames in between; Flash has automatically placed a hold on these frames so that they continuously fill with a copy of the keyframe.

Moving the Symbol on the New Frame:

Image7: Showing new frame

Now we’re going to go back to our workspace. Making sure that I’m still on frame 12 (the red timeline indicator should be over that frame), I use the Arrow tool to select the circle that I drew before; I can tell that it’s selected by the blue outline surrounding it again. Click and drag the circle to move it anywhere on the canvas; I’m going to move mine to the lower right-hand corner of my active space. (Unless you want to animate your shape moving off-screen, don’t move it into the gray area surrounding the canvas; objects in the gray area do not show up in the final animation.)

If I go back to the timeline and click on frame 1, pressing the “Enter” key will play my animation so far; though, it doesn’t look quite right, does it? The circle stays in its original position for the first eleven frames, then suddenly snaps to the new location. In the next step I’ll take care of that by applying a motion tween.