Hadoop Distributed File System (HDFS)



21. What presentation and visualization skills does a data scientist need?  

Visualization Tool Skills

You may be able to mine and model the gathered data, but are you able to visualize it?

If you want to be a successful data scientist, you should be able to work with data visualization tools to represent your analyses visually. Some of these include R, Flare, HighCharts, AmCharts, D3.js, Processing, and the Google Visualization API.

But this is not the end! If you are really keen to become a data scientist, you should also have the following skills:

Communication Skills: Raw statistics and Excel outputs are tricky for non-specialists to deal with, so data scientists should be able to present data in a way that communicates the results clearly to business users.

Business Skills: Data scientists have to play multiple roles and communicate with diverse people across the organization, so strong business skills that include communication, planning, organizing and managing will be of great help. This includes understanding business and application requirements and interpreting the information accordingly. A data scientist should also have an overall understanding of the key challenges in the industry and be aware of the relevant financial ratios for better decision making. Bottom line: a data scientist has to think ‘business’ as well.

Problem-solving skills: This seems obvious, as data science is all about problem solving. An efficient data scientist must take the time to look into a problem deeply and come up with a feasible solution that suits the user.

Prediction Skills: A data scientist should also be an efficient predictor, with a broad knowledge of algorithms so that the right one can be selected to fit the data model properly. This involves a certain amount of creativity in using and representing the data wisely.

Hacking: I know it sounds scary, but hacking skills such as manipulating text files at the command line, understanding vectorized operations and algorithmic thinking will make you a better data scientist.

Looking at the above skill set, it is clear that being a data scientist is not just about knowing everything about data. It is a job profile that is an amalgamation of data skills, math skills, business skills and communication skills. With all these skills together, a data scientist can rightfully be called the rock star of the IT field.



22. What skills does it take to become an awesome and efficient data scientist?  

We covered the skills that are required to become a data scientist. There is a huge difference between just becoming a data scientist and becoming an awesome and efficient one. The following skills, along with those mentioned above, set you apart from a normal or even a mediocre data scientist.

Mathematical skills – Calculus, matrix operations, numerical optimization, stochastic methods, etc.

Statistical skills – Regression models, trees, classification, diagnostics, applied statistics, etc.

Communication – Visualization, presentation and writing.

Database – Besides CouchDB, knowledge of other non-traditional databases like MongoDB and Vertica.

Programming languages – Pig, Hive, Java, Python, etc.

Natural language processing and Data Mining.



23. Why do we need Hadoop?  

Every day a large amount of unstructured data gets dumped into our machines. The major challenge is not storing these large data sets in our systems but retrieving and analyzing the big data in organizations, especially when the data is spread across different machines at different locations. This is where the need for Hadoop arises. Hadoop has the ability to analyze data present on different machines at different locations very quickly and in a very cost-effective way. It uses the concept of MapReduce, which enables it to divide a query into small parts and process them in parallel; this is also known as parallel computing.
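To make the MapReduce idea above concrete, here is a minimal sketch of the classic word-count job written against Hadoop's Java MapReduce API. It is illustrative only: the input and output paths are assumptions, and a real job would be packaged as a JAR and submitted to a cluster.

```java
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // Map step: each mapper works on one split of the input in parallel
  // and emits (word, 1) pairs.
  public static class TokenMapper extends Mapper<Object, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    protected void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      for (String token : value.toString().split("\\s+")) {
        if (!token.isEmpty()) {
          word.set(token);
          context.write(word, ONE);
        }
      }
    }
  }

  // Reduce step: all counts for the same word are brought together and summed.
  public static class SumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable v : values) {
        sum += v.get();
      }
      context.write(key, new IntWritable(sum));
    }
  }

  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance(new Configuration(), "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenMapper.class);
    job.setCombinerClass(SumReducer.class);
    job.setReducerClass(SumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path("/input"));     // illustrative path
    FileOutputFormat.setOutputPath(job, new Path("/output"));  // illustrative path
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```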



24. What are some of the characteristics of Hadoop framework?  

The Hadoop framework is written in Java. It is designed to solve problems that involve analyzing very large amounts of data (e.g. petabytes). The programming model is based on Google's MapReduce, and the storage layer is based on Google's distributed file system (GFS). Hadoop handles large files with high throughput and supports data-intensive distributed applications. Hadoop is scalable, as more nodes can easily be added to it.




25. Give examples of some companies that are using the Hadoop framework?  

A lot of companies use the Hadoop framework, such as Cloudera, EMC, MapR, Hortonworks, Amazon, Facebook, eBay, Twitter, Google and so on.



26. What is the basic difference between traditional RDBMS and Hadoop?  

A traditional RDBMS is used for transactional systems to report and archive data, whereas Hadoop is an approach to storing huge amounts of data in a distributed file system and processing it. An RDBMS is useful when you want to seek out one record from big data, whereas Hadoop is useful when you want to take in the big data in one shot and perform analysis on it later.



27. What is structured and unstructured data?  

Structured data is the data that is easily identifiable as it is organized in a structure. The most common form of structured data is a database where specific information is stored in tables, that is, rows and columns. Unstructured data refers to any data that cannot be identified easily. It could be in the form of images, videos, documents, email, logs and random text. It is not in the form of rows and columns.



28. What are the core components of Hadoop?  

The core components of Hadoop are HDFS and MapReduce. HDFS is used to store large data sets, and MapReduce is used to process those large data sets.
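As a small illustration of the storage side, the sketch below writes a file into HDFS and reads it back through Hadoop's Java FileSystem API; MapReduce jobs read their input splits through this same layer. The NameNode address and file path are assumptions for the example.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsHelloWorld {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    conf.set("fs.defaultFS", "hdfs://namenode:9000");  // assumed cluster address

    FileSystem fs = FileSystem.get(conf);
    Path file = new Path("/user/demo/hello.txt");      // illustrative path

    // Write a file into HDFS (the storage layer).
    try (FSDataOutputStream out = fs.create(file, true)) {
      out.write("Hello, HDFS!\n".getBytes(StandardCharsets.UTF_8));
    }

    // Read it back; MapReduce tasks read their input through this same API.
    try (BufferedReader in = new BufferedReader(
        new InputStreamReader(fs.open(file), StandardCharsets.UTF_8))) {
      System.out.println(in.readLine());
    }
  }
}
```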



29. What are the key features of HDFS?  

HDFS is highly fault-tolerant, provides high throughput, is suitable for applications with large data sets, offers streaming access to file system data, and can be built out of commodity hardware.



30. What is Fault Tolerance?  

Suppose you have a file stored in a system, and due to some technical problem that file gets destroyed; there would then be no way to recover the data present in that file. To avoid such situations, Hadoop introduced the feature of fault tolerance in HDFS. In Hadoop, when we store a file, it automatically gets replicated at two other locations as well. So even if one or two of the systems collapse, the file is still available on the third system.



31. Replication causes data redundancy, so why is it pursued in HDFS?  

HDFS works with commodity hardware (systems with average configurations) that has a high chance of crashing at any time. Thus, to make the entire system highly fault-tolerant, HDFS replicates and stores data in different places. Any data on HDFS gets stored at at least three different locations. So even if one of them is corrupted and another is unavailable for some time for any reason, the data can still be accessed from the third one. Hence, there is little chance of losing the data. This replication factor helps us attain the fault tolerance feature of Hadoop.
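As a brief illustration of the replication factor described above, the sketch below reads a file's current replication setting and asks HDFS to keep one more copy of that particular file. The path is an assumption, and the client is presumed to be configured against an existing cluster.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ReplicationCheck {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    Path file = new Path("/user/demo/hello.txt");  // illustrative path

    // Read the file's current replication factor (typically 3 by default).
    short current = fs.getFileStatus(file).getReplication();
    System.out.println("Current replication: " + current);

    // Ask HDFS to keep an extra copy of this particular file.
    fs.setReplication(file, (short) (current + 1));
  }
}
```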



32. Since the data is replicated thrice in HDFS, does it mean that any calculation done on one node will also be replicated on the other two?  

Since the data exists on three nodes, when we run MapReduce programs, calculations are done only on the original data. The master node knows which node has that particular data. If one of the nodes is not responding, it is assumed to have failed; only then is the required calculation done on the second replica.



33. What is throughput? How does HDFS get a good throughput?  

Throughput is the amount of work done per unit of time. It describes how fast data can be accessed from the system and is usually used to measure the performance of a system. In HDFS, when we want to perform a task or an action, the work is divided and shared among different systems, so all the systems execute the tasks assigned to them independently and in parallel, and the work is completed in a very short period of time. In this way HDFS gives good throughput: by reading data in parallel, we decrease the actual time to read data tremendously.



34. What is meant by Large Data Sets?  

It is assumed that HDFS always needs to work with large data sets. It would be an underuse of HDFS if it were deployed to process several small data sets of a few megabytes or even a few gigabytes. The architecture of HDFS is designed to be the best fit for storing and retrieving huge amounts of data. What is required is high aggregate data bandwidth and the scalability to spread out from a single-node cluster to a hundred- or thousand-node cluster. The acid test is that HDFS should be able to manage tens of millions of files in a single instance.



35. What is the Write Once, Read Many model?  

HDFS follows the write-once, read-many approach for its files and applications. It assumes that a file in HDFS, once written, will not be modified, though it can be accessed ‘n’ number of times (future versions of Hadoop may support modifying files too). At present, HDFS strictly has a single writer for a file at any time. This assumption enables high-throughput data access and also simplifies data coherency issues. A web crawler or a MapReduce application is best suited to HDFS.



36. What is Streaming Data Access?  

As HDFS works on the principle of ‘Write Once, Read Many’, the feature of streaming data access is extremely important in HDFS. HDFS is designed more for batch processing than for interactive use, so the emphasis is on high throughput of data access rather than low latency. HDFS focuses not so much on how data is stored as on how to retrieve it at the fastest possible speed, especially while analyzing logs. In HDFS, reading the complete data set quickly is more important than the time taken to fetch a single record from it. HDFS relaxes a few POSIX requirements in order to implement streaming data access.



37. What is Commodity Hardware?  

HDFS (Hadoop Distributed File System) assumes that the cluster(s) will run on common hardware, that is, inexpensive, ordinary machines rather than high-availability systems. A great feature of Hadoop is that it can be installed on average commodity hardware; we don't need supercomputers or high-end hardware to work with Hadoop. This leads to an overall cost reduction to a great extent.



38. What is Data Replication and Fault Tolerance?  

HDFS works on the assumption that hardware is bound to fail at some point in time or another, which disrupts the smooth and quick processing of large volumes of data. To overcome this obstacle, HDFS divides files into large blocks of data, and each block is stored on three nodes: two on the same rack and one on a different rack for fault tolerance. A block is the unit of data stored on each DataNode. Although the default block size is 64 MB and the default replication factor is three, both are configurable per file. This redundancy provides robustness, fault detection, quick recovery and scalability, eliminates the need for RAID storage on hosts, and brings the merits of data locality.
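Since block size and replication factor are configurable per file, here is a small sketch of how they could be chosen at file-creation time through the Java FileSystem API. The path, the 128 MB block size and the payload are illustrative assumptions, not values from the original text (which cites the older 64 MB default).

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class CreateWithBlockSettings {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);

    Path file = new Path("/user/demo/big-dataset.bin");  // illustrative path
    int bufferSize = conf.getInt("io.file.buffer.size", 4096);
    short replication = 3;                // three copies, as described above
    long blockSize = 128L * 1024 * 1024;  // e.g. 128 MB blocks for this file

    // create(path, overwrite, bufferSize, replication, blockSize)
    try (FSDataOutputStream out =
             fs.create(file, true, bufferSize, replication, blockSize)) {
      out.writeBytes("payload goes here");
    }
  }
}
```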



39. What is High Throughput?  

As explained above, throughput is the amount of work done per unit of time; it describes how fast data can be accessed from the system and is usually used to measure system performance. In Hadoop HDFS, when we want to perform a task or an action, the work is divided and shared among different systems, so all the systems execute the tasks assigned to them independently and in parallel, and the work is completed in a very short period of time. In this way HDFS delivers high throughput: by reading data in parallel, we decrease the actual time to read data tremendously.



40. Is Moving Computation better than Moving Data?  

Hadoop HDFS works on the principle that a computation is much more efficient when it is performed by an application near the data it operates on, rather than far away from it, particularly when large data sets are involved. The major advantages are reduced network congestion and increased overall throughput of the system. The assumption is that it is often better to move the computation closer to where the data is located rather than moving the data to where the application is running. To facilitate this, HDFS provides interfaces for applications to relocate themselves nearer to where the data is located.
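One such interface is the block-location metadata HDFS exposes: given a file, a scheduler can ask which hosts hold each block's replicas and place tasks on or near those hosts. The sketch below simply prints this information; the path is an illustrative assumption.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class BlockLocality {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    FileStatus status = fs.getFileStatus(new Path("/user/demo/big-dataset.bin"));

    // One BlockLocation per block: offset, length, and the hosts holding replicas.
    BlockLocation[] blocks = fs.getFileBlockLocations(status, 0, status.getLen());
    for (BlockLocation block : blocks) {
      System.out.printf("offset=%d length=%d hosts=%s%n",
          block.getOffset(), block.getLength(), String.join(",", block.getHosts()));
    }
  }
}
```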


