Deployment: Users access the relevant metadata, depending on their needs.
If correctly designed and implemented, a data warehouse drastically reduces the time required in the decision-making process. To do so, it employs three tools, namely online analytical processing (OLAP), data mining, and information visualization (Parankusham & Madupu 2006).
OLAP supports three main functions:
– Query and reporting: The ability to formulate queries without having to use a database programming language.
– Multidimensional analysis: The ability to carry out analyses from several perspectives. Tanler (1997) provides an example of a product analysis that can then be repeated for each market segment. This enables quick comparison of data associations from different areas (e.g. by location, time, etc.). This analysis can include customers, markets, products, and so on.
– Statistical analysis: This function attempts to reduce large amounts of data to formulas that capture the answer to a query.
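To make the multidimensional-analysis idea concrete, here is a minimal sketch in plain Python. The sales records, dimension names, and figures are all invented for illustration; a real OLAP engine would work on a warehouse, not an in-memory list.

```python
from collections import defaultdict

# Hypothetical sales records: each row is (region, quarter, product, revenue).
# All names and figures are illustrative only.
sales = [
    ("North", "Q1", "Widget", 120.0),
    ("North", "Q2", "Widget", 150.0),
    ("South", "Q1", "Widget", 90.0),
    ("South", "Q1", "Gadget", 200.0),
    ("South", "Q2", "Gadget", 210.0),
]

def pivot(rows, dims):
    """Aggregate revenue along the chosen dimensions (0=region, 1=quarter, 2=product)."""
    totals = defaultdict(float)
    for row in rows:
        key = tuple(row[d] for d in dims)
        totals[key] += row[3]
    return dict(totals)

# The same data viewed from two perspectives -- first by region alone,
# then by (region, quarter) -- echoing Tanler's repeated product analysis.
by_region = pivot(sales, [0])
by_region_quarter = pivot(sales, [0, 1])
```

The point of the sketch is only that changing the dimension list re-slices the same underlying data, which is what lets an analyst quickly compare associations by location, time, and so on.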
OLAP is essentially responsible for telling the user what happened to the organization (Thierauf 1999). It thus enhances understanding reactively, through the summarization of data and information.
What is Data Mining?
This is another process used to create usable knowledge or information from data warehousing. Data mining, unlike statistical analysis, does not begin with a preconceived hypothesis about the data, and the technique is better suited to heterogeneous databases and data sets (Bali et al 2009). Karahoca and Ponce (2009) describe data mining as "an important tool for the mission critical applications to minimize, filter, extract or transform large databases or datasets into summarized information and exploring hidden patterns in knowledge discovery (KD)." The knowledge discovery aspect is emphasized by Bali et al (2009), since the management of this new knowledge falls within the KM discipline.
It is beyond the scope of this site to provide an in-depth look at the data mining process. Instead, I will present a very brief overview, and point readers who are interested in the technical aspects towards free sources of information.
Very briefly, data mining employs an array of tools and techniques, including symbolic methods and statistical analysis. According to Botha et al (2008), symbolic methods look for pattern primitives, using pattern description languages to find structure. Statistical methods, on the other hand, measure and plot important features, which are then divided into classes and clusters.
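The statistical side of this can be illustrated with a toy example: grouping a single numeric feature into clusters with a bare-bones one-dimensional k-means. The customer-spend values and starting centres are made up, and real data mining would use far richer features and algorithms.

```python
# A minimal sketch of statistical clustering: split one numeric feature
# into two groups by iteratively reassigning points to the nearer centre.
def kmeans_1d(values, c1, c2, iterations=10):
    for _ in range(iterations):
        a = [v for v in values if abs(v - c1) <= abs(v - c2)]
        b = [v for v in values if abs(v - c1) > abs(v - c2)]
        if a:
            c1 = sum(a) / len(a)   # move each centre to its cluster's mean
        if b:
            c2 = sum(b) / len(b)
    return sorted([c1, c2]), sorted(a), sorted(b)

# Invented customer-spend values that fall into "low" and "high" groups.
spend = [12, 15, 14, 90, 95, 88]
centres, low, high = kmeans_1d(spend, 0.0, 100.0)
```

No hypothesis about the data is supplied up front; the structure (two spending groups) emerges from the values themselves, which is the contrast with classical hypothesis-driven statistics drawn above.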
Data mining is an extremely complex process with several different process models. One is the CRoss-Industry Standard Process for Data Mining (or CRISP-DM). The process involves six phases (Maraban et al, in Karahoca & Ponce 2009):
Business understanding -> data understanding -> data preparation -> modeling -> evaluation -> deployment
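The six phases above can be sketched as a simple pipeline of functions. This is only a hedged illustration: each phase here is a trivial stand-in (the "model" is just a mean), whereas real CRISP-DM phases are rich and iterative, with loops back to earlier phases.

```python
# Each CRISP-DM phase is modelled as a function that enriches a shared state.
def business_understanding(state):
    state["goal"] = "predict churn"          # define business objectives
    return state

def data_understanding(state):
    state["raw"] = [1, None, 3, 4]           # collect and explore the data
    return state

def data_preparation(state):
    state["clean"] = [x for x in state["raw"] if x is not None]  # cleanse
    return state

def modeling(state):
    state["model"] = sum(state["clean"]) / len(state["clean"])   # toy "model"
    return state

def evaluation(state):
    state["ok"] = state["model"] > 0         # judge against the business goal
    return state

def deployment(state):
    state["deployed"] = state["ok"]          # release only if evaluation passed
    return state

phases = [business_understanding, data_understanding, data_preparation,
          modeling, evaluation, deployment]

state = {}
for phase in phases:
    state = phase(state)
```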
For more on data mining see the book "Data Mining and Knowledge Discovery in Real Life Applications", edited by Ponce & Karahoca (2009), available for free from intechopen.com, where numerous other potentially relevant resources can also be downloaded.
What is Information Visualization?
This process involves representing data and information graphically so as to better communicate its content to the user. It is a way to make data patterns more visible, more accessible, easier to evaluate, and easier to communicate. Data visualization includes graphical interfaces, tables, charts, images, 3D presentations, animation, and so on (Turban & Aronson in Parankusham & Madupu 2006).
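As a minimal sketch of the idea, the snippet below renders an invented data series as a text bar chart, the simplest possible visualization. The labels and counts are hypothetical; in practice one would use a charting library rather than ASCII bars.

```python
# Render (label, value) pairs as a text bar chart so the pattern in the
# numbers becomes visible at a glance.
def bar_chart(data, width=20):
    """Return a list of lines, one bar per (label, value) pair."""
    peak = max(value for _, value in data)
    lines = []
    for label, value in data:
        bar = "#" * round(width * value / peak)   # scale bar to the peak value
        lines.append(f"{label:<8}{bar} {value}")
    return lines

chart = bar_chart([("Q1", 40), ("Q2", 100), ("Q3", 70)])
```

Even this crude chart shows why visualization helps: the Q2 spike is obvious from the bar lengths in a way it is not from the raw list of numbers.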
Decision support systems (DSS) are other tools used in conjunction with data warehousing. These are discussed in the following subsection.