Big Data Challenges

Big Data Challenges Overview

  1. Big Data Business Challenges
  • Evaluating large volumes of structured, unstructured and semi-structured content.
  • Effective management of data storage and retrieval.
  • Analysis of old stored historical data, with prediction capabilities based on the analysis results.
  • Storage and management of content generated through various social media networks like Facebook, Twitter, YouTube, review websites, consumer forums, etc.
  • Understanding customer sentiment through social media channels.


2. Big Data Technical Challenges

  • Processing different kinds of high-volume content and data in real time from different data sources.
  • Classification of data based on context.
  • Evaluating data and analysis results in real time.
  • Better visualization of content and data analysis results.
  • Options for customers to create domain-specific applications.
  • Easy integration with existing database infrastructure.

 

Big Data 2012 – Big Data Analytics


Every day, companies generate about fifteen petabytes of data about their business and financial operations, as well as their customers and suppliers. An impressive volume of data also circulates in social media and on mobile devices. Another volume, just as impressive, is generated by the growing number of sensors and other embedded devices in the physical world, such as roads, vehicles, aircraft and robotic machines, among others. One second of high-definition video generates 2,000 times more bytes than a page of text. Capturing, handling and analyzing this enormous volume of data is a major challenge.

This is how an issue that is starting to attract attention arises: so-called Big Data. The term refers to databases considerably larger than those usually encountered, which current technologies are not well suited to handle. Of course that is a subjective and moving definition: a size considered big today can become small in a few years. Today our backup disks at home already work at the terabyte scale, and large databases are already at the scale of petabytes.

Treating these data analytically can generate large benefits for society and for business. Recently, the McKinsey Global Institute published a very interesting report on the economic potential of using Big Data, called “Big Data: The next frontier for innovation, competition and productivity”, which can be accessed here.

Big Data already spans all sectors of the economy. One study showed that in 2009 every American company with over a thousand employees stored, on average, more than two hundred terabytes of data, and in some sectors the average volume reached the petabyte scale.

Using Big Data is starting to prove itself as a differentiating factor in the business landscape. Some cases cited in the McKinsey report show that companies gained substantial competitive advantages by exploring a huge volume of data analytically and in a timely manner. Big Data revolves around two keywords: one is volume (databases of enormous size) and the other is speed (handling and analytical treatment must be done very quickly, in some cases even in real time). This is due to the scope of data that can be handled. A traditional data warehouse collects data from transactional systems such as ERP. These systems record the operations carried out by the business, such as a sale, but they do not record information about transactions that did not occur, even though these may be reflected in discussions about the company and its products in social media. The company can also record further information by digitizing the conversations customers hold with call centers and the moving images recorded on video in stores. This data, typically unstructured, is already available, and the idea of Big Data is to integrate it in order to produce a more comprehensive body of information, allowing the company to increasingly make decisions based on facts and not just on sampling and intuition.
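As a rough illustration of this integration idea (plain Python, with invented product names, sales figures and posts; a real system would use proper text analytics rather than keyword matching), the sketch below joins structured sales records with unstructured mentions of the same products, so negative feedback can be read alongside the sales numbers.

```python
# Minimal sketch: enriching structured transactions with unstructured mentions.
# All data and names here are made up for illustration.
from collections import defaultdict

sales = [  # structured records, e.g. exported from an ERP system
    {"product": "phone-x", "units": 120},
    {"product": "tablet-y", "units": 45},
]

mentions = [  # unstructured text, e.g. social media posts or call transcripts
    "the phone-x battery is terrible",
    "love my new phone-x, great camera",
    "tablet-y screen cracked after a week",
]

NEGATIVE_WORDS = {"terrible", "cracked", "broken", "refund"}

def mention_stats(texts):
    """Count total and negative mentions per product (very naive matching)."""
    stats = defaultdict(lambda: {"mentions": 0, "negative": 0})
    for text in texts:
        words = set(text.lower().split())
        for sale in sales:
            product = sale["product"]
            if product in text:
                stats[product]["mentions"] += 1
                if words & NEGATIVE_WORDS:
                    stats[product]["negative"] += 1
    return stats

stats = mention_stats(mentions)
for sale in sales:
    s = stats[sale["product"]]
    print(sale["product"], "units sold:", sale["units"],
          "mentions:", s["mentions"], "negative:", s["negative"])
```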

Of course there are still major challenges ahead. One is the technology needed to quickly handle this huge volume of data. There are some technologies geared toward processing large volumes, like Hadoop, and specific database systems like Cassandra, an open source system used today by Facebook, Twitter and Reddit, which has to deal with huge volumes of data at speed and in a distributed manner. Its largest operational environment handles over one hundred terabytes in a cluster of one hundred and fifty servers. Another interesting technology is an appliance geared toward handling large databases, such as Netezza, recently acquired by IBM.
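As a small illustration of how such a distributed store is used from application code, the hedged sketch below relies on the DataStax `cassandra-driver` Python package and assumes a Cassandra node running locally; the `demo` keyspace and `events` table are invented for the example.

```python
# Minimal sketch using the DataStax cassandra-driver package (pip install cassandra-driver).
# Assumes a Cassandra node reachable on localhost; keyspace and table names are made up.
from uuid import uuid4
from cassandra.cluster import Cluster

cluster = Cluster(["127.0.0.1"])
session = cluster.connect()

session.execute("""
    CREATE KEYSPACE IF NOT EXISTS demo
    WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}
""")
session.set_keyspace("demo")
session.execute("""
    CREATE TABLE IF NOT EXISTS events (
        source text, event_id uuid, payload text,
        PRIMARY KEY (source, event_id)
    )
""")

# Writes are spread across the cluster by the partition key (source).
session.execute(
    "INSERT INTO events (source, event_id, payload) VALUES (%s, %s, %s)",
    ("twitter", uuid4(), "example post text"),
)

# Reads within a single partition stay fast even as the table grows.
for row in session.execute(
    "SELECT source, payload FROM events WHERE source = %s", ("twitter",)
):
    print(row.source, row.payload)

cluster.shutdown()
```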

Another technology with a lot of room to grow in the Big Data space is called stream computing. IBM recently announced InfoSphere Streams, based on a research project at IBM Research called System S. The paper on System S can be seen at this link (pdf).

The idea of stream computing brings a new paradigm, and it is a fantastic one. In the traditional data mining model, an enterprise filters data from its various systems and, after building a data warehouse, fires off “queries”. In practice, it is mining on static data, which does not reflect the present moment but the context of hours, days or even weeks past. With stream computing, this mining is done in real time. Instead of firing queries at a static database, there is a continuous stream of data (streaming data) flowing across a set of queries. We can think of many applications, whether in finance, healthcare or even manufacturing. Take this last example: in a project with a semiconductor company, the manufacturing process is monitored in real time for the detection and classification of faults. With stream computing, failures in manufactured chips are detected in minutes, not hours or even weeks. The defective wafers can be reprocessed and, more importantly, real-time adjustments can be made to the manufacturing processes themselves.
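As a loose sketch of this shift (plain Python with simulated readings; this is not InfoSphere Streams code, and the wafer fields and threshold are invented), the example below applies a set of standing queries to each measurement the moment it arrives, rather than querying a warehouse after the fact.

```python
# Minimal sketch of the stream-computing idea: standing queries applied to data
# as it flows past, instead of queries run later against a static store.
# The readings are simulated; in a real system they would come from sensors.
import random
import time

def sensor_readings(n=20):
    """Simulate a continuous stream of wafer test measurements."""
    for i in range(n):
        yield {"wafer": i, "defect_score": random.random()}
        time.sleep(0.05)  # pretend readings arrive over time

def process_stream(stream, defect_threshold=0.9):
    """Apply standing 'queries' to every reading as soon as it arrives."""
    total, count = 0.0, 0
    for reading in stream:
        # Query 1: flag likely-defective wafers immediately, so they can be reprocessed.
        if reading["defect_score"] > defect_threshold:
            print("ALERT: wafer", reading["wafer"], "looks defective, reprocess it")
        # Query 2: keep a running average to spot drift in the process itself.
        total += reading["defect_score"]
        count += 1
        print("wafer", reading["wafer"],
              "running avg defect score:", round(total / count, 3))

process_stream(sensor_readings())
```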