DB2 management, tutorials, scripts, coding, programming and tips for database administrators
Many big data applications are designed, built and installed without a formal load test. This is unfortunate, because load testing gives the database administrator a wealth of valuable information and can make the difference between poor and acceptable big data performance. This article reviews big data application implementation practices, with proactive tips on load testing to make your production implementation a success.
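To make the idea concrete, here is a minimal sketch of a load test: several concurrent workers each run a representative query in a loop and record latencies. The `sales` table, the query, and the worker counts are all illustrative; `sqlite3` (in-memory) stands in for the real database so the sketch is self-contained. In practice you would connect to DB2 (for example via the `ibm_db` driver) and replay representative production SQL.

```python
import sqlite3
import statistics
import threading
import time

# Illustrative workload parameters -- tune these to mirror production.
QUERY = "SELECT COUNT(*) FROM sales WHERE amount > ?"
WORKERS = 4
ITERATIONS = 25
DB_PATH = "file:loadtest?mode=memory&cache=shared"

def setup_db():
    """Create a small stand-in table. Returns the connection so the
    shared in-memory database stays alive for the duration of the test."""
    conn = sqlite3.connect(DB_PATH, uri=True)
    conn.execute("CREATE TABLE IF NOT EXISTS sales (id INTEGER, amount REAL)")
    conn.executemany("INSERT INTO sales VALUES (?, ?)",
                     [(i, i * 1.5) for i in range(1000)])
    conn.commit()
    return conn

def worker(latencies, lock):
    """One simulated user: open a connection, run the query repeatedly,
    and record the elapsed time of each execution."""
    conn = sqlite3.connect(DB_PATH, uri=True)
    for _ in range(ITERATIONS):
        start = time.perf_counter()
        conn.execute(QUERY, (500,)).fetchone()
        elapsed = time.perf_counter() - start
        with lock:
            latencies.append(elapsed)
    conn.close()

def run_load_test():
    keeper = setup_db()
    latencies, lock = [], threading.Lock()
    threads = [threading.Thread(target=worker, args=(latencies, lock))
               for _ in range(WORKERS)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    keeper.close()
    ordered = sorted(latencies)
    return {
        "samples": len(latencies),
        "avg_ms": statistics.mean(latencies) * 1000,
        "p95_ms": ordered[int(len(ordered) * 0.95)] * 1000,
    }

if __name__ == "__main__":
    print(run_load_test())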
Big data applications are now in place to support business analytics processing. However, many standard performance tuning options (such as indexes) may no longer apply. Here we investigate the differences between operational systems and big data systems, and present common options for big data performance tuning.
Many IT enterprises have created big data applications to store and analyze massive amounts of historical data. Should these applications be installed in their own exclusive environment, or tightly integrated with current operational systems?
In its first phase of implementation, the big data application received and stored data from operational systems, allowing business analysts to use business analytics software to analyze the data for trends. Now we are in the next phase. Big data applications must now create value by feeding data back into operational systems. This becomes even more important during the busiest time of the year. What are the most important things for the IT staff to prepare?
Big data applications store huge amounts of data in a massively parallel storage array and use sophisticated analytical software to return solutions to SQL queries. With multiple applications accessing enterprise data, we now need mechanisms to guarantee that the data remains readily available. Here, we discuss a few common techniques to ensure that a big data application does not cause bottlenecks in mission-critical processes.
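One such technique is a concurrency gate: cap how many analytical queries may run at once, so ad hoc big data work cannot starve mission-critical operational processing. The sketch below is hypothetical; `MAX_CONCURRENT`, `run_query` and `slow_scan` are illustrative names, not features of any particular DB2 workload manager, which in practice would enforce this kind of limit for you.

```python
import threading
import time

# Hypothetical concurrency gate: at most MAX_CONCURRENT analytical
# queries run at once; the rest wait, and give up after a timeout
# rather than piling onto an already saturated system.
MAX_CONCURRENT = 2
_gate = threading.Semaphore(MAX_CONCURRENT)

def run_query(query_fn, *args, timeout=5.0):
    """Run query_fn only if a slot frees up within `timeout` seconds;
    otherwise reject the query instead of queueing it indefinitely."""
    if not _gate.acquire(timeout=timeout):
        raise RuntimeError("analytics workload saturated; query rejected")
    try:
        return query_fn(*args)
    finally:
        _gate.release()

def slow_scan(n):
    """Stand-in for a long-running analytical scan."""
    time.sleep(0.01)
    return sum(range(n))
```

For example, `run_query(slow_scan, 10)` runs the scan under the gate and returns its result. The design choice here is rejection over unbounded queueing: a rejected report can be retried later, while an overloaded shared system degrades every workload at once.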
Most large companies have one or more big data applications, which provide fast access to large stores of customer and sales data. As the IT organization grows, new job categories and new tasks are added to the mix. These include big data hardware and software support, business analysts who use analytics to probe and explore the data, and managers who must supervise and prioritize job tasks.
As the holiday season approaches, your organization can expect more BI activity as you take advantage of a significant increase in customer interactions. The smart IT organization should be proactive and prepare now for larger data volumes and more analytics activity by tuning its big data application for high performance.
In many cases, the success of a big data application can be traced to how well it is integrated into your enterprise data warehouse. This article presents several ways to get this done quickly and efficiently from the beginning.
Today, everyone realizes that in order to reach their full performance potential, big data applications require some tuning. Tuning isn't easy and it isn't free, and responsibility for understanding requirements and implementing the appropriate tuning methodology falls squarely on the shoulders of the database administrator. Read on to learn more.
Big data applications are not usually considered mission-critical: while they support sales and marketing decisions, they do not significantly affect core operations such as customer accounts, orders, inventory, and shipping. Why, then, are major IT organizations moving quickly to incorporate big data in their disaster recovery plans?
As big data application success stories (and failures) have appeared in the news and technical publications, several myths have emerged about big data. This article explores a few of the more significant myths, and how they may negatively affect your own big data implementation.
Most large organizations have implemented one or more big data applications. As more data accumulates, internal users and analysts execute more reports and forecasts, which in turn leads to additional queries, analysis, and reporting. The cycle continues: data growth drives better analysis, which generates still more reporting. Eventually the big data application swells with so much data and querying that performance suffers. How can you avoid this?