A Logic Petri Net Model for Dynamic Multi-Agent Game Decision-Making


 

This study proposes a logical Petri net model that leverages the strengths of Petri nets in modeling batch processing and uncertainty in value passing, and integrates the relevant game elements of multi-agent game processes, in order to model multi-agent decision problems and resolve optimization issues in dynamic multi-agent game decision-making. First, the attributes of each token are defined as rational agents, and utility function values and state probability transition functions are assigned to them. Second, decision transitions are introduced; the optimal decision transition to trigger is determined by comparing token utility function values, and an associated algorithm is given. Finally, a dynamic game emergency business decision-making process for sudden events is modeled and analyzed using the logic game decision Petri net.
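
To make the decision-transition idea concrete, here is a minimal Python sketch (not taken from the paper): tokens carry an agent identity, a utility value, and state transition probabilities, and a decision transition fires the branch whose token has the highest utility. All class and attribute names are illustrative assumptions.

# Minimal sketch: tokens interpreted as rational agents; a decision
# transition selects the branch whose token has the highest utility value.
from dataclasses import dataclass, field

@dataclass
class Token:
    agent_id: str                      # token interpreted as a rational agent
    utility: float                     # utility function value assigned to the token
    state_probs: dict = field(default_factory=dict)  # state probability transition function

@dataclass
class DecisionTransition:
    name: str
    candidate_tokens: list             # tokens in the transition's input places

    def fire(self):
        """Select the optimal decision by comparing token utility values."""
        if not self.candidate_tokens:
            raise ValueError("decision transition is not enabled: no tokens")
        return max(self.candidate_tokens, key=lambda t: t.utility)

# Example: two agents compete for the same decision point.
d1 = DecisionTransition("d1", [Token("agent_A", 0.62), Token("agent_B", 0.81)])
print(d1.fire().agent_id)              # -> agent_B (higher utility wins)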

Based on reachable markings, reachability graphs are constructed to analyze the dynamic game process. Algorithms are described for generating these graphs, and the paper explores how the logic game decision model for sudden events can address dynamic game decision problems, generate optimal emergency plans, and analyze resource conflicts during emergency processes. The effectiveness and superiority of the model in analyzing the emergency business decision-making process for sudden events are validated. A sudden event is an emergency that poses direct risks to human health, life, and property and requires urgent intervention to prevent further deterioration. These intervention measures are organized into a process, typically described in an emergency plan and referred to as the emergency response process.
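
As a rough illustration of how a reachability graph can be generated from reachable markings, the following Python sketch runs a breadth-first search over the markings of an ordinary Petri net. It is a generic construction, not the paper's logic-game-specific algorithm, and the net encoding (pre/post maps over place indices) is an assumption.

# Generic reachability-graph generation for a bounded ordinary Petri net.
# Markings are tuples of token counts per place.
from collections import deque

def enabled(marking, pre):
    return all(marking[p] >= n for p, n in pre.items())

def fire(marking, pre, post):
    m = list(marking)
    for p, n in pre.items():
        m[p] -= n
    for p, n in post.items():
        m[p] += n
    return tuple(m)

def reachability_graph(m0, transitions):
    """transitions: dict name -> (pre, post), each a {place_index: count} map."""
    nodes, edges, queue = {m0}, [], deque([m0])
    while queue:
        m = queue.popleft()
        for name, (pre, post) in transitions.items():
            if enabled(m, pre):
                m2 = fire(m, pre, post)
                edges.append((m, name, m2))
                if m2 not in nodes:
                    nodes.add(m2)
                    queue.append(m2)
    return nodes, edges

# Tiny example: p0 --t1--> p1 --t2--> p2
net = {"t1": ({0: 1}, {1: 1}), "t2": ({1: 1}, {2: 1})}
nodes, edges = reachability_graph((1, 0, 0), net)
print(sorted(nodes))   # [(0, 0, 1), (0, 1, 0), (1, 0, 0)]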

In this process, all emergency personnel are dedicated to managing the disaster so as to minimize or avoid its secondary impacts. Generating better contingency plans before the emergency response has become an urgent issue to address. The uncertainty of evacuation time during emergencies has been analyzed stochastically by coupling the uncertainty of fire detection, alarm, and pre-movement with evacuation time. The forecasting model is event-dependent and takes into account many social and environmental factors for different sorts of events, such as socio-economic conditions and geographical features. This is due to the great range of emergency occurrences, both natural and man-made. The business decision-making process in disaster operations management varies greatly depending on the type of occurrence, taking into account factors such as severity, affected region, population density, and local environment, among others.

There are many different types of hazards worldwide. Natural, biological, technological, and sociological hazards put the health of vulnerable people at risk and have the potential to seriously impair public health. For instance, the authorities in charge of providing clean water are responsible for preventing waterborne illnesses, while law enforcement and road transportation agencies are in charge of reducing traffic accidents. Zoonotic illnesses (diseases spread from animals to people) need coordinated action from the agricultural, environmental, and health sectors. Increases in new or reemerging diseases are attributed to a number of factors, including global warming, low vaccination rates in high-risk and vulnerable populations, growing vaccine resistance and skepticism, rising antimicrobial resistance, and the expanding coverage, frequency, and speed of international air travel. A professional who develops plans for emergencies, accidents, and other calamities is known as an emergency management director. Directors of emergency management work with an organization's leadership team to evaluate possible hazards and create best practices for handling them. Designing emergency procedures and developing preventative actions to reduce the risk of emergencies fall under their purview. They play a crucial part in ensuring the safety of all employees and in equipping staff to act effectively in an emergency. Disaster preparedness plans select appropriate organizational resources, lay down tasks and roles, establish rules and processes, and schedule exercises to increase preparedness for disasters. Anticipating the needs of populations affected by catastrophes improves the effectiveness of response activities, as does increasing the ability of workers, volunteers, and disaster management teams to deal with crises. Plans may cover sites for temporary refuge, evacuation routes, and emergency water and energy sources; they might also address stockpile requirements, communication protocols, the chain of command, and training programs. One of the most crucial metrics for gauging the effectiveness of an evacuation is the time it takes.

Residents who are detained for an extended period represent a serious threat to staff safety because of the unpredictability of events. A building's occupants who attempt to flee during a fire exhibit a range of response times (RTs) between the moment they receive a warning and the decision to leave. A number of complex factors, such as occupants' familiarity with evacuation routes, their ability to operate evacuation amenities and fire protection apparatus, the number of people in the area, and occupants' psychological and physical conditions and behaviors, can affect how affected personnel are evacuated from a disaster site. Different factors have an impact on evacuation time (ET); the results indicate that it is a variable influenced by a significant number of uncertain factors, including emergency evolution dynamics, human behavior under emergency conditions, and the environment. There are clear benefits to developing appropriate emergency response plans, using safety and industrial hygiene resources, to mitigate or prevent harm to factory personnel and nearby community residents caused by chlorine gas leaks. Everyone on the team has to know how to spot leaks and react to them in order to keep employees safe when handling chlorine. Since chlorine has a strong, unpleasant scent resembling that of a potent cleaning solution such as bleach, most chlorine leaks are quite easy to detect. Every facility that works with chlorine must have an emergency kit on hand; the kit should include a variety of tools that can be used to stop or limit leaks around plugs, valves, or the side wall of a tank or cylinder used to store chlorine. Get to fresh air and leave the location where the chlorine gas was released. If the community has an emergency notification system, make sure people are familiar with it. For directions, consult local authorities and emergency bulletins. If the chlorine release occurred outside, seek protection inside.

To ensure that the contamination does not enter, make sure all windows are closed and ventilation systems are off. Leave the location where the chlorine was released if you are unable to get inside. Get outside and look for higher ground if the chlorine release occurred indoors. If the chlorine leak was caused by chemicals or household cleaners, open the windows and doors to let in fresh air. We focus on agent-based problem-solving strategies with business decision-making capabilities for CSC, based on multi-criteria decision-making (MCDM) methods for automated selection in CSC and Petri net (PN) techniques for modeling such contexts. Petri nets are used as modeling tools for the multi-agent system viewed as a discrete-event dynamic process. In comparison with alternating-current microgrids, direct-current microgrids stand out for their ease of control and power management; they also offer a number of benefits, including higher conversion and transmission efficiency, greater reliability even in remote locations, convenient control, lower costs, and less filtering effort owing to the absence of reactive power, phase synchronization, and high inrush current. A rational agent must interact if improving its subjective utility requires interaction with other agents. Whenever rational agents interact, at least one of them is trying to maximize its utility. Agents collaborate if their aims coincide; if their aims conflict, they compete.

The majority of interactions fall between these two extremes. An interacting agent does well to predict the objectives of other agents; a better-informed agent may foresee some aspects of how other agents will act in pursuit of their objectives. In such situations, strategic thinking is required. An interaction in which strategic thinking occurs is referred to as a strategic interaction (SI). In game theory, SIs, or games, are examined. Game theory takes rationality into account along with the potential to forecast rational behavior, and common knowledge of rationality is assumed: each participant in an interaction believes in the rationality of the others, believes that they in turn believe in his rationality, and so on. The equilibrium is the expected behavior of the players in an interaction; no player gains by unilaterally deviating from it, which is why it is termed an equilibrium. In finite games, there is at least one equilibrium. Artificial intelligence applies games in at least two ways: agent design and mechanism design. In agent design we are given a game and must compute appropriate behavior; in mechanism design we have an expectation about behavior and must devise the game rules. These two goals can be addressed theoretically, by running algorithms over a game tree, or practically, by creating an environment in which various real players can interact. Most games are written in low-level programming languages. Game rules are more easily edited at the level of the game representation: algorithms may be created that change the representation in every way imaginable, such as 'reduce the number of players' or 'remove simultaneous turns'.

Game representations may also be used to create evolutionary mechanisms. Logical Petri nets can further simplify the network structure of real-time system models, making it easier to analyze the properties of the system at a conceptual level while also alleviating, to some extent, the problem of state-space explosion. Petri nets can not only characterize the structure of a system but also describe its dynamic behavior. Many scholars have proposed extended forms of Petri nets, such as logical Petri nets, timed Petri nets, and colored Petri nets, and their applications are becoming increasingly widespread. Multi-agent games involve multiple elements, such as players, strategies, utilities, information, and equilibrium. The existing modeling elements of logical Petri nets cannot accurately describe these elements, so logical Petri nets need to be improved: based on their existing modeling elements, modifications or additions of new elements are needed to model the game elements, enabling the new model to accurately describe dynamic game problems in multi-agent systems.

We consider a mean-field game (MFG)-like scenario in which a large number of agents must select among a set of potential target destinations. The scenario is inspired by effective biological collective decision mechanisms, such as the collective navigation of fish schools and honey bees searching for a new colony site. The mean trajectory of all agents captures how each individual influences, and is influenced by, the group's choice. The model can be seen as a stylized representation of opinion crystallization in a political campaign, for instance. Initially the agents' biases are determined by their starting spatial positions; in a later generalization of the model, by a combination of starting position and a priori individual preference. Existence criteria are developed for the fixed-point-based finite-population equilibrium conditions. In general, there may be several equilibria, and for the agents to compute them properly they need to be aware of all the initial conditions.
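
The fixed-point flavor of such equilibrium conditions can be illustrated with a toy iteration (assumed dynamics, not the paper's equations): agents on a line pick one of two targets by trading off their own position against a pull toward the population's mean choice, and the mean is iterated until it stops changing.

# Illustrative fixed-point iteration for a mean-field-style discrete choice.
# All dynamics and parameters below are assumptions for demonstration only.
import numpy as np

rng = np.random.default_rng(0)
x0 = rng.uniform(-1.0, 1.0, size=1000)    # initial positions = initial biases
targets = np.array([-1.0, 1.0])           # two candidate destinations
social_weight = 0.5                        # strength of the mean-field coupling

def best_response(mean_choice):
    # cost of each target: distance from own position + distance from group mean
    cost = np.abs(x0[:, None] - targets) + social_weight * np.abs(mean_choice - targets)
    return targets[np.argmin(cost, axis=1)]

mean = 0.0
for _ in range(100):                       # fixed-point iteration on the mean choice
    choices = best_response(mean)
    new_mean = choices.mean()
    if abs(new_mean - mean) < 1e-9:
        break
    mean = new_mean

print(f"equilibrium mean choice ~ {mean:.3f}")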


 

Spatial federated learning and blockchain-based 5G communication model for hiding confidential information

 


At present, the preferred method of transmitting a rapid blockchain message is to send several transactions, constituting a covert 5G communication technique. However, this approach is inadequate for larger quantities of sensitive data, the potential for losing confidential information is significant, and the sender's identity is not concealed. Despite the high embedding rate of steganography techniques, they are increasingly vulnerable to detection and statistical feature-based analysis. This work proposes a covert blockchain communication methodology that incorporates spatial federated learning and blockchain to address these issues. By using Ciphertext-Policy Attribute-Based Encryption (CP-ABE) to encrypt the sensitive document and uploading it to the InterPlanetary File System (IPFS), the technique conceals both the sensitive files and the sender's identity. Then, using image steganography based on Generative Adversarial Networks (GANs), the sender embeds the hash value of the encrypted document into a carrier image. After uploading the encrypted image to IPFS, the sender creates a transaction carrying the hash value of the encrypted image; this transaction is signed with a ring signature and broadcast to the blockchain network for verification and confirmation. The recipient retrieves the encrypted document and decrypts it according to the access control policy established by CP-ABE. According to the experimental findings, this model can increase the volume of sensitive data transmitted from KB to MB scale while providing higher confidentiality and security.
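
A sender-side outline of this pipeline might look as follows. Every helper below (cp_abe_encrypt, ipfs_add, gan_embed, ring_sign, broadcast_tx) is a trivial hypothetical stand-in for the component named in the abstract, not a real CP-ABE, IPFS, GAN, or ring-signature implementation; only hashlib is a real library.

# Sender-side outline of the described covert-communication pipeline.
import hashlib

# Trivial stand-ins, NOT real CP-ABE / IPFS / GAN / ring-signature code.
def cp_abe_encrypt(data, policy): return b"POLICY:" + policy.encode() + b"|" + data
def ipfs_add(data): return hashlib.sha256(data).hexdigest()        # fake content ID
def gan_embed(image, payload): return image + payload.encode()     # fake steganography
def ring_sign(message): return hashlib.sha256(b"ring|" + message.encode()).hexdigest()
def broadcast_tx(tx): print("broadcast payload:", tx["payload"][:16], "...")

def send_confidential(document: bytes, access_policy: str, carrier_image: bytes):
    # 1. Encrypt the sensitive document under a CP-ABE access policy, upload to IPFS.
    ciphertext = cp_abe_encrypt(document, access_policy)
    doc_cid = ipfs_add(ciphertext)

    # 2. Embed the hash of the encrypted document into a carrier image
    #    (GAN-based steganography in the paper), upload the stego image to IPFS.
    doc_hash = hashlib.sha256(ciphertext).hexdigest()
    stego_image = gan_embed(carrier_image, doc_hash)
    img_cid = ipfs_add(stego_image)

    # 3. Build a transaction carrying the hash of the stego image, ring-sign it
    #    to hide the sender, and broadcast it to the blockchain network.
    tx = {"payload": hashlib.sha256(stego_image).hexdigest(), "image_cid": img_cid}
    tx["signature"] = ring_sign(tx["payload"])
    broadcast_tx(tx)
    return doc_cid, img_cid

doc_cid, img_cid = send_confidential(b"quarterly risk report",
                                     "(dept:finance AND role:auditor)",
                                     b"CARRIER_IMAGE_BYTES")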


 

Exploring the Profound Influence of Machine Learning on Business Intelligence: A Comprehensive Review



In the dynamic landscape of data-driven decision-making, the intersection of Machine Learning (ML) and Business Intelligence (BI) has become a pivotal arena, propelling organizations toward more informed and strategic insights. The fusion of these two domains is characterized by a continuous evolution, marked by innovative trends that redefine how businesses extract value from their data. This synergy between ML and BI not only augments analytical capabilities but also transforms raw data into actionable intelligence, empowering organizations to navigate the complexities of the modern business environment. 

As we delve into the emerging trends in ML and BI integration, it is evident that the convergence of advanced analytics and business intelligence is ushering in a new era of efficiency, automation, and foresight. From augmented analytics and predictive modeling to the democratization of machine learning through automation tools, the landscape is evolving rapidly. This exploration will delve into key trends shaping this amalgamation, offering a glimpse into the future of data-driven decision-making where insights are not just discovered but dynamically generated, enabling businesses to stay ahead of the curve and make strategic decisions with unparalleled precision. 

Most technology is focused on creating value in businesses. Technology is a tool that creates value, and companies exist to facilitate the exchange of value between people. Technology allows businesses to exchange value more effectively and efficiently and to create new value that can be shared. Its impact is significant because it is continuously developing. Many areas across different sectors improve how they work with it, especially through machine learning, which helps businesses enhance their business processes. Machine learning helps businesses make decisions, since it has a strong relationship with business decision-making.

The contribution of machine learning in companies is essential, since it has a strong relationship with business intelligence and helps organizations make better decisions. Without machine learning, business intelligence is far less effective in decision-making, and company leaders struggle to make successful decisions. Business intelligence (BI) refers to converting data into information, which is subsequently transformed into knowledge. The goal of BI is to make better, more informed choices. Business intelligence assists businesses in collecting and analyzing data to detect trends and patterns; this information may then be used to enhance strategic planning, operational efficiency, and marketing initiatives, among other things. One of the most significant advantages of business intelligence is that it may assist firms in reducing waste and optimizing resources. An organization that determines it is selling products that are not in great demand, for example, might adjust its inventory levels accordingly. Alternatively, suppose a company notices that a particular product is being returned at a higher rate than others.

In that case, it may look into what could be causing the issue and take appropriate action. Organizations may also use business intelligence to improve customer service. Businesses can better understand what consumers are looking for by watching their activity over time and analyzing the data; if a firm notices that its consumers are unhappy with its service, it can rectify the situation and improve customer satisfaction. Business intelligence (BI) is now a critical component of many firms' day-to-day operations. Businesses benefit from it because it improves decision-making and helps them better understand their goods and services. Better fulfilling consumer wants, increasing sales, providing better service to customers, lowering expenses, maximizing resources, and minimizing waste all improve firms' bottom lines. Recently, we have observed machine learning capabilities being integrated into business intelligence systems, making BI considerably more successful at uncovering hidden insights. BI solutions that can combine these capabilities efficiently and in a user-friendly manner will soon become the standard; as consumers get used to this feature, they will expect it to be available at all times.

GPS and other technologies we now cannot imagine our lives without are examples of this. Combining these capabilities automates the process of unearthing insights that business users were not aware were available until they were discovered. For example, on a typical dashboard, a business user looking at their top-line sales might conclude that the trend looks healthy and that there is no need to investigate further. However, there may be grounds for concern in the fine print, in the underlying makeup of the sales figures, which is difficult to discern: some items may be doing well, while others may be deteriorating in performance. This critical understanding remains hidden from view. Additionally, automating this process means insights are delivered much more rapidly, enabling the company to respond quickly and with better information.

Automating these procedures should allow analysts to spend more time on other responsibilities in their organizations. Many analysts are tied up in routine chores such as variance analysis, the search for anomalies, and the drafting of commentary for inclusion in reports. Automation frees them to devote more time to higher-value activities. In data analysis, machine learning models successfully uncover hidden patterns and insights. For many decades, data professionals have used these techniques to tackle technical and challenging business problems. Because of improvements in computing power, it is now possible to construct and execute these complicated mathematical models on more accessible platforms: models that used to need costly, high-end hardware can now run on commodity platforms accessible to everyone, regardless of budget.

Machine learning models are categorized into supervised, unsupervised, semi-supervised, and reinforcement learning models, and these categories include several algorithms that can be used for business intelligence (BI), such as the Feedforward Neural Network (FNN), Artificial Neural Networks (ANN), the Support Vector Machine (SVM) algorithm, the KNN algorithm, and others. This paper presents a comprehensive review of machine learning models used in business intelligence and reviews the impact of machine learning on business intelligence.
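
As a small illustration of two of the algorithms named above, the following snippet trains an SVM and a KNN classifier on a synthetic dataset (assuming scikit-learn is installed); the data and parameters are made up for demonstration.

# Train and evaluate an SVM and a KNN classifier on synthetic data.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=1000, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42)

for name, model in [("SVM", SVC(kernel="rbf")), ("KNN", KNeighborsClassifier(n_neighbors=5))]:
    model.fit(X_train, y_train)                          # fit on the training split
    acc = accuracy_score(y_test, model.predict(X_test))  # evaluate on held-out data
    print(f"{name}: test accuracy = {acc:.3f}")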


 

An Efficient Machine Learning Prediction Method for Vehicle Detection: Data Analytics Framework

 


The availability of transportation is considered a significant hallmark of a developed society. Since the evolution of the human species, the need to relocate from one location to another has been a fundamental requirement. At present, there exists a plethora of transportation options in Indonesia; however, most individuals favor road transportation due to its ease and convenience. The rise in population has led to a corresponding increase in the number of vehicles on the roadways. Hence, it is a challenge for security authorities and governmental bodies to effectively oversee the movement of all vehicles across various locations.

The present study proposes a methodology for detecting and tracking vehicles using video-based techniques. The initial stages of the process involve preprocessing, including frame conversion and background subtraction. Next, vehicles are detected using change detection and a body-shape model. The following stage is feature extraction, focusing on energy features and directional cosines. A data-optimization technique is then applied to the over-extracted feature vector. The methodology integrates a data mining technique based on association rules, complemented by a random forest classification algorithm. Overall, the approach integrates multiple methodologies to attain effective and precise identification of automobiles in video-derived datasets.
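
A simplified sketch of the early stages described above (frame conversion, background subtraction, and candidate detection) is shown below, assuming opencv-python and a hypothetical input file traffic.mp4; the paper's energy and directional-cosine features, association-rule mining, and random forest classifier are not reproduced here.

# Frame conversion, background subtraction, and contour-based vehicle candidates.
import cv2

cap = cv2.VideoCapture("traffic.mp4")            # hypothetical input video
bg_subtractor = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=25)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)            # frame conversion
    fg_mask = bg_subtractor.apply(gray)                        # background subtraction
    fg_mask = cv2.morphologyEx(fg_mask, cv2.MORPH_OPEN, kernel)  # remove speckle noise
    contours, _ = cv2.findContours(fg_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if w * h > 500:                                        # crude size filter for vehicles
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("vehicle candidates", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()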

Traffic disruption is a prevalent issue in Indonesia, particularly in the province of Special Capital District (DKI) Jakarta. The authorities have implemented multiple measures to mitigate traffic disruption in Jakarta. One of these initiatives involves the establishment of the Jakarta Smart City information system. The Jakarta Smart City information system harnesses closed-circuit television (CCTV) data from multiple sources, such as the Transportation Agency (DisHub), Bali Tower, the Public Works Service (PU), and Transjakarta, among others. Around 6,000 CCTVs are distributed across the Jakarta region, with their real-time data being transmitted and displayed on the portal of the Jakarta Smart City system. Quick detection of vehicles becomes necessary to provide inattentive drivers with sufficient time to avoid traveling conflicts and thus minimize the likelihood of rear-end collisions. Moreover, the current techniques for traffic surveillance that count automobiles using electric circuits on the road are costly. All of these factors necessitate the investigation of novel and favored techniques for the vehicle recognition task. Typically, the primary objective of detecting vehicles is to identify potential vehicle positions within an image and designate them as areas of interest (A.O.I.) for subsequent processing tasks. In contrast, computerized automobile identification is a complicated and intrinsically tricky task.

To detect moving vehicles on roadways, reliable systems and programs with efficient extraction methods are required. Real-time traffic inputs produce an enormous volume of data every day; to manage such a large quantity of data, artificial intelligence (A.I.) and computer vision methods are combined to improve the precision of the framework. This recent technological advancement has reduced human labor needs. A robust video-based surveillance apparatus must adapt to the behavior of its environment; however, threats such as camera shake and noise interference still exist. Recognizing vehicles during the day is difficult because long reflections cast by the sun can lead to misclassification or interference, while night-time detection presents difficulties due to the lack of adequate illumination, making it difficult for the classifier to perform effectively. Identifying target motion using artificial intelligence (A.I.) technology is one of the foundations of automotive environment sensing. Moving objects in traffic typically refer to vehicles or people in operating conditions, while immobile things, including transportation infrastructure and vegetation, are typically regarded as the background scenery.

To obtain the desired format, it is necessary to distinguish moving components from the background in real time by examining the input video footage extensively. Diverse strategies have been employed to build technologies capable of detecting, counting, and classifying automobiles for traffic tracking in automated transport platforms. This section addresses such systems and the methodologies used in creating them. Naz et al. presented video-based real-time tracking of vehicles using an optimized simulated-loop methodology: real-time traffic monitoring equipment installed along roads is used to determine the number of vehicles traveling on the road, and counting is done in three stages by monitoring each vehicle's movements through an imaginary loop monitoring zone. Ukani et al. presented an alternative video-based vehicle identification approach in which comparatively high-mounted observation cameras were employed to collect a roadway video feed; an adjustable background-estimation framework and Gaussian shadow reduction were the two primary techniques used. The system's precision depends on the viewing angle and its capacity to eliminate shadowing and phantom effects.


 

Streamlining Stock Price Analysis: Hadoop Ecosystem for Machine Learning Models and Big Data Analytics



The rapid growth of data in various industries has led to the emergence of big data analytics as a vital component for extracting valuable insights and making informed decisions. However, analyzing such massive volumes of data poses significant challenges in terms of storage, processing, and analysis. In this context, the Hadoop ecosystem has gained substantial attention due to its ability to handle large-scale data processing and storage. Additionally, integrating machine learning models within this ecosystem allows for advanced analytics and predictive modeling. This article explores the potential of leveraging the Hadoop ecosystem to enhance big data analytics through the construction of machine learning models and the implementation of efficient data warehousing techniques. The proposed approach of optimizing stock price analysis through machine learning models and data warehousing empowers organizations to derive meaningful insights, optimize data processing, and make data-driven decisions efficiently. The proliferation of data has transformed the way organizations operate, and the ability to extract valuable insights from vast amounts of data has become a competitive advantage across industries. However, traditional data processing and analysis techniques are insufficient to handle the sheer volume, velocity, and variety of big data.
This necessitates the adoption of advanced technologies and frameworks, such as the Hadoop ecosystem, to overcome these challenges. In recent years, the prevalence of big data technology has revolutionized numerous industries, including retail, manufacturing, healthcare, and finance. The utilization of big data has proven instrumental in enhancing operational efficiency by harnessing valuable insights derived from data analysis. This research paper focuses on investigating the application of big data analytics in the context of the stock market, utilizing a publicly available dataset sourced from The New York Stock Exchange (NYSE). By leveraging big data analysis, organizations can identify trends, patterns, and correlations that enable informed decision-making processes. Particularly in the stock market, analysis plays a pivotal role for investors and traders in assessing a company's intrinsic value before executing buying or selling decisions.
The widespread adoption and efficacy of big data technology is largely attributable to the evolution of multifarious frameworks and platforms that cater to the manipulation and scrutiny of colossal data sets. Apache Hadoop takes a preeminent position among these big data platforms, ingeniously amalgamating the powerful MapReduce paradigm and the durable Hadoop Distributed File System (HDFS) for proficient data governance. This technology has been embraced ubiquitously across a myriad of sectors, empowering organizations to distil pertinent insights, thus refining their decision-making apparatus. A case in point is the New York Stock Exchange (NYSE) that has judiciously harnessed big data technology, with a particular emphasis on Apache Hadoop, to conduct in-depth analysis of market fluctuations and draw data-oriented verdicts, conferring upon them a competitive superiority. In parallel, Apache Spark has emerged on the scene as a sought-after big data framework, renowned for its expedited processing velocity and its superior versatility in handling data, thereby outpacing the capabilities of its counterpart, Apache Hadoop. The New York Stock Exchange (NYSE) can harness the capabilities of Apache Hadoop's MapReduce and Apache Spark frameworks to process and decipher vast quantities of financial data. As illustrated in Table 1, Spark offers superior processing speed and enhanced flexibility in data manipulation, rendering it a prime candidate for processing and analyzing real-time data pertinent to the financial sector, more specifically, within the ambit of stock exchanges. This proves particularly valuable in the dynamic realm of finance where instantaneous data insights are paramount to the decision-making process. In addition, Spark's fundamental component, the Resilient Distributed Dataset (RDD), presents an advantageous data processing approach within distributed systems, exhibiting higher efficiency and fault tolerance compared to MapReduce. RDD programming can be employed for data transformations, including mapping and filtering, as well as operations like counting and collecting. Given its ability to be cached in-memory, RDD enhances data access efficiency. Consequently, Spark can confer a competitive edge to stock exchanges, such as the NYSE, requiring the capability to process and dissect voluminous real-time financial data in order to maintain their standing in the brisk-paced financial industry.
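
The RDD operations mentioned above can be illustrated with a toy PySpark snippet (assuming pyspark is installed); the price records are invented for demonstration and are not NYSE data.

# Toy illustration of RDD transformations (map, filter), actions (count, collect),
# and in-memory caching.
from pyspark import SparkContext

sc = SparkContext("local[*]", "stock-rdd-demo")

# (symbol, closing_price) records
prices = sc.parallelize([("AAA", 101.2), ("BBB", 48.7), ("AAA", 103.9), ("CCC", 12.4)])
prices.cache()                                    # keep the RDD in memory for reuse

high = prices.filter(lambda r: r[1] > 50.0)       # transformation: keep prices above 50
symbols = high.map(lambda r: r[0]).distinct()     # transformation: unique symbols

print("records above 50:", high.count())          # action: count
print("symbols:", symbols.collect())              # action: collect

sc.stop()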


Support Vector Machine for Multiclass Classification of Redundant Instances


In recent years, the support vector machine has become one of the most important classification techniques in pattern recognition, machine learning, and data mining, owing to its superior classification performance and solid theoretical foundation.

However, its training time increases dramatically as the number of samples grows, and training becomes more involved when dealing with multi-class problems. A fast training-data reduction approach, MOIS, suited to multi-class classification tasks, is presented to address these issues. Redundant training samples are eliminated while the boundary samples that play a vital role are retained, considerably reducing the training data and mitigating the problem of unequal distribution between categories.
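
The general idea of keeping boundary samples while discarding redundant ones can be illustrated with a simple stand-in heuristic, shown below: keep only training samples whose nearest neighbor belongs to a different class, then train an SVM on the reduced set. This is an assumed illustration using scikit-learn, not the MOIS algorithm itself.

# Training-set reduction before SVM training via a simple boundary heuristic.
from sklearn.datasets import make_classification
from sklearn.neighbors import NearestNeighbors
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=3000, n_features=20, n_classes=3,
                           n_informative=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# For each training sample, find its nearest other sample; keep the sample only
# if that neighbor has a different label (i.e., the sample lies near a boundary).
nn = NearestNeighbors(n_neighbors=2).fit(X_tr)
_, idx = nn.kneighbors(X_tr)
boundary_mask = y_tr[idx[:, 1]] != y_tr

full = SVC().fit(X_tr, y_tr)
reduced = SVC().fit(X_tr[boundary_mask], y_tr[boundary_mask])

print("kept", boundary_mask.sum(), "of", len(y_tr), "training samples")
print("full-set accuracy   :", accuracy_score(y_te, full.predict(X_te)))
print("reduced-set accuracy:", accuracy_score(y_te, reduced.predict(X_te)))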

The experimental results demonstrate that MOIS can maintain or even improve the classification performance of support vector machines while substantially enhancing training efficiency. On the Optdigits dataset, the proposed method improves classification accuracy from 98.94% to 99.05% while reducing training time to 15% of the original; on the first 100 categories of HCL2000, the accuracy rate is marginally increased (from 99.29% to 99.30%) while training time is dramatically reduced to less than 6% of the original. In addition, MOIS itself has high operational efficiency.

 


Cognitive Approach Using SFL Theory in Capturing Tacit Knowledge in Business Intelligence



The complexity of Business Intelligence (BI) processes needs to be explored to ensure that the BI system properly treats tacit knowledge as part of the data source in the BI framework. Therefore, a new approach to handling tacit knowledge in the BI system still needs to be developed. The library is an ideal place to gather tacit knowledge: it is full of explicit knowledge stored on its bookshelves, yet tacit knowledge is abundant in the heads of the librarians. The explicit knowledge they gained from education in library and information science is not sufficient to deal with a complex and contextual work environment. The complexity comes from the many interconnected affairs that connect librarians with their surroundings, such as supra-organizations, employees, the physical environment, and library users. This knowledge is contextual because there are various types of libraries and different types of library users who demand different management. Since tacit knowledge is hard to capture, we need to use all possible sources of externalization of tacit knowledge. The effort to capture this knowledge is made through a social process in which knowledge is transferred from an expert to an interviewer. For this reason, it is important that the interview process be based on SFL (Systemic Functional Linguistics) theory.

The cognitive approach is ideally suited to capturing knowledge from among the massive amounts of data available these days. The decision-maker typically must integrate multiple streams of information, from information systems or other collaborations with knowledge systems, when making decisions [1]. Furthermore, decisions may be based on organizational politics or routines [2], and decision-makers may limit themselves to a few choices because of "bounded rationality" [3]. Ducharme and Angelelli [4] introduced the use of cognitive computing as advanced analytics to capture and extract tacit knowledge by elaborating predictive analytics, stochastic analytics, and cognitive computing. Moreover, the advanced analytics approach is still implemented within the Business Intelligence (BI) environment [17]. Thus, the basic BI framework involving a tacit knowledge approach can be illustrated as shown in Figure 1.

There is little earlier research on business intelligence in academic libraries and the library profession. One example is Cox and Jantti [5] on the Library Cube project, a business intelligence system that demonstrates the value academic libraries can provide. However, that research does not target tacit knowledge at all, since it only targets the information already provided in the academic information system. Heims et al. [6] mention that reporting BI research and creating BI reports are a key area of responsibility for librarians in the information era. We addressed the problem through open dialogue with the librarian, which is essentially what would take place between a BI manager and a librarian when developing clear communication channels [7]. Note that for librarians, BI is part of their challenge in the information era [8].
Since tacit knowledge is hard to capture, we need to use all possible sources of externalization of tacit knowledge [9]. The effort to capture this knowledge is done through a social process where the transfer of knowledge takes place from an expert to an interviewer. For this reason, it is important for the interview process to be based on the SFL theory. 

According to SFL theory, only a fraction of "can do" turns into "can mean", and only a fraction of "can mean" turns into "can say" [9]. This is what Polanyi meant when he said "we know more than we can tell" [10]. Hence, only a portion of tacit knowledge can be captured by linguistic means; we need other means arising from "can mean", that is, anything that can be analyzed semiotically. These can be non-verbal cues, drawings, written text, and so on. We refer to drawings, photographs, videos, written text, and the like as documented sources, which are beyond our analysis; here we focus on non-verbal cues. However, whenever documented sources are considered relevant, we could use them as a source of tacit knowledge.

A. Linguistic Source of Tacit Knowledge
According to SFL theory, language is realized in four strata: semantics, lexicogrammar, phonology, and phonetics [11]. Semantics is the highest level and explains the hidden meaning of language. Lexicogrammar is the aspect of language that explains the realized meaning, visible in the choice of words and grammar used. Phonology is the meaning that exists in sound, and phonetics is the speech that arises from language activity. This stratification thus moves from something abstract (semantics) to something concrete (phonetics).
Someone will choose a word to represent his experience when speaking. Which word or wording is chosen can indicate whether the experience or knowledge being expressed is inherited or not. In fact, a person will sometimes find it difficult to find the right words to describe their knowledge.

Even after knowledge has been expressed verbally and non-verbally, there is still a space where tacit knowledge cannot be expressed at all and can only be demonstrated through behavior. Apart from observation requiring precise and specific timing, experts generally do not like being observed while working [12], and observation becomes more complicated when several experts are involved [12]. Observation can be done non-intrusively, for example with a surveillance camera, but that raises privacy issues. Alternatively, observations can be made through third-person testimony: the interview is conducted with a third person who has witnessed the behavior of the first person, whose knowledge is the target to be revealed.

The framework above shows the design used to capture the comprehensive knowledge of experts. Based on SFL theory, tacit knowledge consists of three levels. The first level is the most basic, where a person can only do something but cannot interpret it, let alone say it. This is contextual tacit knowledge, because it can only arise in a supportive context, and it can only be collected through observation. Even so, because the context is very specific in terms of space and time, only people present in that context can see and understand, from their own perspective, what the tacit knowledge is. In this study, that person is assumed to be a peer. Researchers collected data on tacit knowledge from peers through cognitive interviews. We can therefore conclude that there are two ways to collect tacit knowledge:
1. Focused on the stated problem. The participant is presented with a problem that requires tacit knowledge to solve. The tacit knowledge needed to solve this problem can be collected through interviews with respondents chosen by questionnaire. Questions in the interview are informed by the urgency of the problems detected by the questionnaire. Here, the sequence of steps determines the completeness of the tacit knowledge. The figure below shows the connection between questionnaire design and decision.


Data Analytics, Data Analysis, and Data Science

Four Types of Data Analytics. Source: Intellipaat
Some people believe that data analytics and data analysis mean the same thing, and they sometimes use the terms interchangeably. Technically this is not correct; there is in fact a clear difference between the two. So let us discuss the not-so-obvious difference between the terms analysis and analytics, because although the words look similar, they have different meanings.
First, we will start with analysis.
Consider the following.

You have a large dataset containing data of many different types. To avoid the risk of errors, or to avoid being overwhelmed trying to understand the data, you split the data you have collected so that it is easier to digest the pieces, study them individually, and examine how they relate to the other parts. At this point, we can conclude that you are performing analysis on the data you have obtained.

One important thing to remember, however, is that analysis is performed on things that have already happened in the past. For example, you might run an analysis to explain how a sales target ended up being met, or the historical decline in rainfall last summer.
All of this means that we perform analysis to explain how and/or why something happened.

Now, on to analytics.
Analytics generally refers to the future rather than explaining past events; in other words, it explores future potential. Analytics is essentially the application of logical and computational reasoning to the component parts obtained through analysis. In doing analytics, you look for patterns to explore what you can do in the future.

Here, analytics branches into two fields: qualitative and quantitative.

Qualitative analytics typically uses your intuition and experience together with analysis to plan your next business move (and is often combined with quantitative techniques by applying formulas and algorithms to the numbers you have gathered from your analysis).

For example, say you own an online clothing store. You are ahead of the competition and have a good understanding of what your customers need and want. You have carried out a very detailed analysis of women's clothing articles and feel confident about which fashion trends to follow. You can use this intuition to decide which clothing styles to start selling. That would be qualitative analytics, but you may not know when to introduce the new collection.

By relying on past sales data and user-experience data, you can estimate, on the basis of quantitative calculation, in which month it is best to do so. Quantitative analytics involves specific numbers and computations. In this case, you perform qualitative analysis to explain how or why, and quantitative analysis on past data to explain how sales declined last summer in order to improve them in the future.

So how does this relate to data science? Data science is the output, or the activity, of (let us say) a statistician who has kept up with modern technology. For more detail, see the post Terminologi Ilmu Data (Data Science) dalam Kegiatan Bisnis (Data Science Terminology in Business Activities).

Data Science in Business Activities

Four Types of Data Science Jobs. Source: Udacity
Why is data so important? What is so essential about data and its relationship to a healthy business? As a company operates, whether data is readily available or not, it is fair to conclude that data is the foundation of every successful company. Moreover, management at every level of a company realizes that obtaining specific data greatly helps the company compete.

Within a company there is a team that works on processing data; let us call it the data team. The data team has one goal: to solve problems in the company's business. The team does a great deal of work on the available data according to the problems the company faces.

Simple Business Glossary Example. Source: Ewsolutions

Within the data team there is a business intelligence team that provides business dashboards, that is, presentations of data about what has happened in the past.

Business Intelligence Dashboard. Source: Ducenit

The data team then uses several business analytics techniques or data analysis tools to develop models that can predict future outcomes.


To avoid confusion, let us discuss data science first.
One cause of today's confusion about data science is the continuous evolution of the various branches of study concerned with data, which has spawned many terms for the science of data; one of them is data science. Someone holding the title of statistician twenty-five years ago was responsible for collecting and cleaning a few datasets and applying various statistical methods. With the significant growth of data and advances in technology, however, that statistician eventually became able to extract patterns from the data at hand or from data that had already been analyzed.

Business and Data Science Buzzwords. source: Udemy Course

This pattern extraction began, for example, with statisticians developing mathematical models aimed at making more precise and accurate forecasts. A few years later, the same statisticians, armed with new mathematical models and statistical methods capable of more accurate forecasting, gave rise to data mining.

Business and Data Science Buzzwords. source: Udemy Course

This in turn produced higher-quality data for making predictions. The terminology of predictive analytics then turned the statistician into a data scientist, a statistician who has kept up with modern technology.

Business and Data Science Buzzwords. source: Udemy Course



Tacit Knowledge for Business Intelligence Framework: A Part of Unstructured Data?

The idea of capturing knowledge from different sources can be very beneficial to Business Intelligence (BI). Organizations need to collect data sources of both structured and unstructured types, including individuals' tacit knowledge, in order to obtain better output from data analysis. Therefore, the complexity of BI processes needs to be explored to ensure that the process properly treats tacit knowledge as part of the data source in the BI framework. Moreover, the linkage between unstructured data and tacit knowledge is generally consistent, since one characteristic of tacit knowledge is that it is unstructured: difficult to capture, codify, estimate, investigate, formalize, write down, and communicate accurately. The cognitive approach is ideally suited to capturing tacit knowledge from among the massive amounts of data available these days. Typically, the organization must integrate multiple streams of data from several sources, or other collaboration resources, with the knowledge systems in order to make decisions. This paper explores the possibility of tacit knowledge being used in a BI framework to perform data analysis for decision makers.

Introduction

Raw data or information is retained within the organization in the form of explicit, implicit, and tacit knowledge, with limited resources. Several studies have been conducted in the Business Intelligence (BI) and Knowledge Management (KM) domains to address the problem by using tacit knowledge for data analysis. New information, knowledge, and unstructured data are used to improve decision making. The raw data and information need to be processed to acquire knowledge through an analytical approach; normally the analyst uses descriptive or predictive analysis to produce results for making a decision.

The idea of taking knowledge from different sources can be very beneficial to BI, especially for tacit knowledge. Identifying content, or "data", from authors or experts in the form of tacit and "know-how" knowledge is important for data analysis. Currently, BI applications for managing and analyzing explicit knowledge make up the major portion of enterprise BI software for data analytics. Therefore, the need for a BI application that supports managing tacit knowledge is crucially important. This paper starts with a discussion of how tacit knowledge can be part of unstructured data and can later be used for data analysis in the BI framework. Even though several models and frameworks have been proposed by many researchers, the limitations of BI frameworks still need to be explored. Additionally, the traditional BI framework can be enhanced by using the cognitive approach to handle the capture of tacit knowledge sources.

Tacit knowledge needs to be converted into either structured or unstructured data in order to be codified in the BI system. The proposed model for managing tacit knowledge is developed using the KID model and a cognitive approach to capture and extract tacit knowledge, and to develop a new data-centric model that works with traditional structured data as well as unstructured data, including video, images, and digital signal processing.

METHODOLOGY

This research adopts a hybrid qualitative and quantitative methodology, since it involves experiments with human knowledge. It has investigated the limitations of the BI framework in capturing various data types in order to identify the problem of handling tacit knowledge for data analysis. This establishes the research gap and makes it worth exploring a solution to this problem. Several studies have been conducted in the fields of BI and KM, but research exploring a solution for handling tacit knowledge in data analysis within the BI system is lacking. Therefore, we argue that there must be a study proposing a method to handle tacit knowledge so that it can later be used for data analysis in the BI framework. The traditional BI framework will be enhanced by using the cognitive approach to handle tacit knowledge in the BI framework for data analysis.

Cognitive Approach

Cognitive Analytics

Cognitive Mapping


PROPOSED BI FRAMEWORK

The key to BI is to capture, analyze, and share such knowledge. The process of capturing knowledge with the cognitive approach might be useful for improving the predictive and prescriptive results of the BI framework. The authors present the KID model, which consists of three elements, D, I, and K, together with a knowledge repository named the K-store, as shown in Figure 4 and Figure 5. The capital D refers to data, which represent the observable properties of objects in the external world. The capital I represents information, the result of data being interpreted by existing knowledge; it refers to what humans have said.

The capital K refers to knowledge, which is formed by assimilating information into existing knowledge or derived by updating knowledge. D, I, and K are interrelated, and their interrelationships are defined by three transformation functions. The KID model is a cognitive model, since moving from data to knowledge is a cognitive process. It adopts the results of psychologists' investigations, simulates human information processing, and is built on our argument that any cognitive model can be built with three transformation processes from data to knowledge. The implications for the relations of data, information, knowledge, and wisdom still lack explicit and pragmatic approaches. Yet tacit knowledge contains wisdom, and wisdom is solely owned by humans. From the model shown in Figure 4 above, the authors state that knowledge is the basic unit of wisdom, and that wisdom is also probabilistic. The cognitive approach is suitable when more than one hypothesis is to be analyzed; moreover, it is a kind of decision support that allows people to explore new opportunities, which has a positive impact. The key to BI is to capture, analyze, and share such knowledge; thus, the process of capturing knowledge with the cognitive approach might be useful for improving the predictive and prescriptive results in BI applications.
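
As a loose illustration of the D, I, K elements and the three transformation functions, the sketch below uses plain Python dictionaries and strings; the representations and function bodies are assumptions for illustration only, since the section does not define them formally.

# Illustrative sketch of the KID elements: data (D) is interpreted into
# information (I) by existing knowledge, and information is assimilated into
# knowledge (K) held in a K-store. All representations are assumptions.
k_store = {"warm weather": "ice cream sells well"}        # existing knowledge (K-store)

def interpret(data, knowledge):
    """D -> I: interpret observed data using existing knowledge."""
    return [(obs, knowledge.get(obs, "unknown significance")) for obs in data]

def assimilate(information, knowledge):
    """I -> K: fold interpreted information back into the knowledge store."""
    for observation, meaning in information:
        knowledge.setdefault(observation, meaning)
    return knowledge

def derive(knowledge):
    """K -> K: derive/update knowledge (placeholder for the third function)."""
    return dict(knowledge)

observations = ["warm weather", "holiday season"]          # D: observable properties
info = interpret(observations, k_store)                    # I
k_store = derive(assimilate(info, k_store))                # K updated
print(k_store)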


. . .



Knowledge and Information Management in Business Intelligence: A Cognitive Analytics Approach

The biggest problem for Knowledge Management (KM) lies in the part of individual knowledge that is tacit. Tacit knowledge is the knowledge and understanding held in an individual's mind, or a person's expertise and experience; this knowledge is usually unstructured, hard to define, and includes personal understanding. Furthermore, tacit knowledge can be lost in the event of a merger, a reorganization, or downsizing within an organization. Contextual analysis supported by cognitive systems is an advanced analysis system used to gather tacit knowledge. Contextual analysis techniques such as relevance ranking are used in addition to entity-relationship modeling, entity extraction, part-of-speech tagging, and so on. In this way, data can be analyzed within a body of implicit and explicit knowledge. Explicit knowledge is formal, systematic knowledge that is easy to communicate and share, and is generally readily available in written or documented form. Implicit knowledge, meanwhile, is an ability that can easily be transferred through practice or by giving examples, such as riding a bicycle or driving a vehicle; it is knowledge accumulated through experience until one becomes an expert.

Furthermore, if implicit knowledge and multiple perspectives are included in an analysis, a contextual analysis can become a cognitive analysis.

This article explores a cognitive approach to analyzing KM in a Business Intelligence (BI) environment.

Knowledge Management, or KM, is a strategic tool aimed at building information into Intellectual Capital (IC) within an organization. Management typically uses KM tools as the most efficient means of turning individuals into valuable assets. In addition, every efficiency measure taken in an organization is more likely to succeed if the organization has its BI processes on the right track; BI is therefore closely tied to the success achieved by KM. An organization may run into problems at the point of execution because of a lack of information. Meanwhile, the technology embedded in BI plays an important role in better managing information at large scale. However, improving the skills of each individual in an organization is not an easy task; it takes time before the desired skills are acquired. This is why knowledge transfer is so important in organizations, especially in the process of articulating knowledge from one individual so that it can be learned and adapted by any other entity.

Business Intelligence

BI comprises important business processes that collect and analyze information for business decisions and actions, particularly through the use of information tools to improve business performance. BI consists of technologies, processes, and implications that enable the acquisition, storage, retrieval, and analysis of data for better decision-making. On-Line Analytical Processing (OLAP) is a BI tool that allows relevant data to be searched and tested, along with the computation and identification of relationships. Data mining can be used to identify trends, patterns, and relationships across large amounts of data; it uses statistical and mathematical techniques together with technology. A Decision Support System (DSS) is the association of human and machine for providing authentic and useful information to support management in making decisions. OLAP is one of the important BI components used in the analysis process, and it has several forms, including classification, sequential patterns, regression, and link analysis. Thus, the BI process is a relevant approach for analyzing knowledge data, which is needed in order to capture and analyze knowledge.

Knowledge Management

KM is a technique for searching for, acquiring, organizing, and communicating information and knowledge within an organization. Knowledge can be tacit or explicit and relates to the understanding of leadership, group efforts, individual experience, and the psyche of employees. Acquisition of relevant information is the process of identifying and capturing material closely associated with the current goal. Retrieval of information is the second stage of the KM process, in which the organization extracts specific information from multiple sources. The knowledge captured from the organization is then processed using BI tools, techniques, or frameworks, after which a cognitive approach to tacit knowledge is applied as part of the analytic solution.

Cognitive Approach for Capturing Knowledge

A cognitive approach is able to record, analyze, remember, learn from, and solve problems using information drawn from individual knowledge and experience. Current cognitive systems can also transfer knowledge and capture best practices in data analysis activities. In this use case, a cognitive system is designed to build a dialogue between human and machine so that such practices can be learned by the system. As long as knowledge is probabilistic, is always influenced by human and social factors, and requires a cognitive way of being managed, the cognitive approach is suitable for analyzing more than one hypothesis. This article therefore explains the use of a cognitive approach for managing knowledge in a BI environment.

Methodology

A qualitative research technique was adopted for this article. This qualitative technique includes a review of the literature on previous research and on the KM and BI models that have been proposed. A theoretical framework serving as the foundation of the research was also developed by adopting several models from earlier studies. The integrated framework needed to achieve this goal is shown in Figure 1.

Figure 1. Theoretical framework of KM & BI integration to achieve competitiveness

The first stage of the methodology is data collection, in which managers were asked several questions relating to achieving competitiveness through KM and the use of BI within it. Some of the questions are as follows:



. . .


Download Full Paper

Managing Knowledge Business Intelligence: A Cognitive Analytic Approach

Knowledge Management (KM) is a strategic tool for building Intellectual Capital (IC) within an organization. Management uses this tool as the most efficient means of turning people into its most valuable assets. Efficiency within an organization is not possible unless the organization practices BI on the right track, and BI is closely associated with the success of the results generated by KM. An organization may hold a bulk of information that creates problems when it comes to implementation. BI technologies play a crucial role in managing such large volumes of information, and BI smooths the path for KM practitioners to attain a competitive edge over competitors. This competitive edge translates into competitive personnel with better performance, work efficiency, and customer relationship management. The competitive nature of any organization is strengthened by work practices that involve a high degree of association. However, building skills within an organization is not an easy task; it takes a while before skills cross the threshold at which tangible benefits can be obtained.

That is why knowledge transfer is very important within the organization, especially for explicating tacit knowledge so it can be learned by other entities in the future. The biggest problem for KM lies in people's tacit knowledge. Moreover, tacit knowledge can disappear in cases of mergers, reorganizations, and downsizing. Contextual analytics supported by a cognitive system is an advanced analytics approach used to collect tacit knowledge. Contextual analytics techniques such as relevancy ranking are used alongside entity-relation modeling, entity extraction, part-of-speech tagging, and so on. Thus, data is analyzed within a confined set of implicit and explicit knowledge. If implicit knowledge and various perspectives are included in this analysis, contextual analytics may become cognitive analytics. This paper explores the cognitive approach for analyzing KM in a BI environment.
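To make the relevancy-ranking step mentioned above concrete, the minimal sketch below ranks a few text fragments against a query using TF-IDF and cosine similarity via scikit-learn. The documents and query are hypothetical, and this is only one of many ways such ranking can be implemented.

```python
# Minimal sketch of relevancy ranking over text fragments (hypothetical data).
# Requires scikit-learn: pip install scikit-learn
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Lessons learned from the last system migration project",
    "Customer complaints are usually resolved fastest by the senior support staff",
    "Quarterly sales figures grouped by region and product line",
]
query = "how experienced staff resolve customer complaints"

# Vectorize documents and query in the same TF-IDF space.
vectorizer = TfidfVectorizer(stop_words="english")
doc_matrix = vectorizer.fit_transform(documents)
query_vector = vectorizer.transform([query])

# Rank documents by cosine similarity to the query (highest first).
scores = cosine_similarity(query_vector, doc_matrix).ravel()
for score, doc in sorted(zip(scores, documents), reverse=True):
    print(f"{score:.3f}  {doc}")
```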


Business Intelligence

Business Intelligence (BI) comprises important business processes that collect and analyze information for business decisions and actions; in particular, it emphasizes the use of information tools to enhance business performance. BI consists of technologies, processes, and implications that allow data to be acquired, stored, retrieved, and analyzed for better decision-making. On-Line Analytical Processing (OLAP) is a BI tool that allows relevant data to be searched and tested, along with the computation and identification of relationships. Data mining identifies trends, patterns, and relationships among the huge volumes of data stored in a data warehouse; it makes use of statistical and mathematical techniques together with technology. A Decision Support System (DSS) is the association of human and machine for providing authentic and useful information to support management in decision-making. OLAP is one of the important BI components used in this process, and it has several traditional forms, among them classification, sequential patterns, regression, and link analysis. Thus, the BI process is a relevant approach to analyzing knowledge data, which requires a proper process to capture and analyze tacit knowledge.
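To illustrate the kind of OLAP-style analysis described above, the sketch below performs a simple roll-up (group-and-aggregate) and a pivot over a hypothetical sales table using pandas; the column names and figures are illustrative only, not data from the paper.

```python
# Minimal OLAP-style roll-up over a hypothetical sales table.
# Requires pandas: pip install pandas
import pandas as pd

sales = pd.DataFrame({
    "region":  ["West", "West", "East", "East", "East"],
    "product": ["A", "B", "A", "A", "B"],
    "year":    [2023, 2023, 2023, 2024, 2024],
    "revenue": [120.0, 80.0, 150.0, 170.0, 95.0],
})

# Roll-up: total revenue by region and year (a typical OLAP aggregation).
rollup = sales.groupby(["region", "year"])["revenue"].sum().reset_index()
print(rollup)

# Pivot: regions as rows, years as columns, for a cross-tab view of the cube.
cube_view = rollup.pivot(index="region", columns="year", values="revenue")
print(cube_view)
```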

Knowledge Management

Knowledge Management (KM) is a technique of searching for, acquiring, organizing, and communicating information and knowledge within an organization. The knowledge can be implicit or explicit and relates to the understanding of leadership, group efforts, individual experience, and the psyche of employees. Acquisition of relevant information is the process of identifying and capturing material closely associated with the current goal. Retrieval of information is the second phase of the KM process, in which the organization extracts specific information from multiple sources. The knowledge captured from the organization is processed using BI, and a cognitive approach is then applied so that tacit knowledge can be used as part of the analytic solution.

Cognitive Approach for Capturing Knowledge

A cognitive approach is able to record, analyze, remember, learn from, and resolve problems using the information available from human knowledge and experience. Current cognitive systems can also transfer knowledge and capture best practices in the data analysis industry. In these use cases, a cognitive system is designed to build a dialogue between human and machine so that best practices are learned by the system, as opposed to the traditional method of programming them as a set of rules. As long as knowledge is probabilistic, is always influenced by human and social factors, and requires a cognitive way of being managed, the cognitive approach is suitable for analyzing more than one hypothesis; it is a kind of decision support that allows people to explore new opportunities, which has a positive impact. Therefore, this paper explains the use of a cognitive approach for managing knowledge in a BI environment.

METHODOLOGY

A qualitative research technique has been adopted for this paper. This includes a careful review of the literature on previous research and on proposed models of KM and BI. The theoretical framework of the research has also been derived from multiple models in previous studies. Knowledge management and business intelligence have the potential to strengthen the effectiveness and competitiveness of organizations; thus, an integrated framework of BI and KM is needed to achieve this goal, as shown in Figure 1.


Figure 1. Theoretical framework of KM & BI integration to achieve competitiveness

The first phase of the methodology is data collection, in which managers were asked a few questions relating to the achievement of competitiveness through KM and the use of BI in it. Some of the questions are as follows:

. . .


Download Full Paper

Achieving Competitive Advantage Through Knowledge Management and Business Intelligence

Knowledge Management (KM) is a strategic tool for building Intellectual Capital (IC) in an organization. Management runs KM as the most efficient tool for turning people, as decision-makers, into the most valuable assets. Decision-makers need the most relevant information at the right moment in order to make useful decisions. Efficiency in decision-making can be achieved through Business Intelligence (BI) practices on the right track; in this respect, BI is closely tied to the success generated by KM. An organization holds information that may cause problems when it comes to the implementation stage. BI technologies play an important role in better managing information at a larger scale. BI smooths the path for KM practitioners to achieve a competitive edge over their competitors, and this competitiveness is accompanied by competitive personnel with better performance, work efficiency, and customer relationship management. The BI technologies that significantly affect KM generally include OLAP, DSS, and data mining. These are strategic BI tools that must be aligned with the organization's overall strategy. They can build two-way relationships among employees in the work environment while disseminating information across the organization. Employees in any organization want to be considered in every single decision, which brings greater loyalty and commitment from personnel working within the organization.

Knowledge Management

KM is defined as a technique for searching for, acquiring, organizing, and communicating information and knowledge to motivate employees. It relates to the understanding of leadership, group efforts, individual experience, and the psyche of employees. Acquisition of relevant information is the process of identifying and capturing material closely associated with the current goal. Retrieval of information is the second stage of the KM process, in which the organization extracts specific information obtained from multiple sources.

Business Intelligence

BI comprises important business processes that collect and analyze information for business decisions and actions, with a particular emphasis on the use of information tools to improve business performance. BI consists of technologies, processes, and implications that enable the acquisition, storage, retrieval, and analysis of data for better decision-making. On-Line Analytical Processing (OLAP) is one of the supporting BI tools that allows relevant data to be searched and tested, along with the computation and identification of relationships. Data mining in BI serves to identify trends, patterns, and relationships across the large amounts of data stored in the data warehouse; it uses statistical and mathematical techniques together with technology. A Decision Support System (DSS) is the association between human and machine for providing authentic and useful information to support management in decision-making. OLAP is one of the important BI components used in the process; it also has several other traditional forms, among them classification, sequential patterns, regression, and link analysis.

Theoretical framework of KM and BI integration to achieve competitiveness

KM and BI have the potential to strengthen the effectiveness and competitiveness of an organization. There is therefore a need for an integrated BI and KM framework to achieve this goal.

DSS is often used as a key BI tool when management is choosing among strategies for KM. OLAP and data mining are modern BI techniques that play an important role in data retrieval. Integrating KM with BI produces better intellectual capital, which is the most valuable asset. BI and KM are also assessed through various models; models are very helpful in categorizing the data warehouse components and building data mining techniques. The various elements of BI must also relate to the elements of KM in order to produce a competitive advantage. BI is practiced in organizations not as a trend but as a necessity, and organizations integrate it as a primary tool for making strategic decisions.

This article also evaluates the integration by setting out the steps of exchange between BI and KM, including the decision-making context. Wang and Wang (2008) state that knowledge workers collaborate and cooperate strongly with IT tools for better performance and socialization, and they discuss logically how data mining, BI tools, and KM can produce better results. Similarly, on-line analytical processing (OLAP) is a tool that helps when extracting data from a data warehouse. These knowledge management tools also work toward achieving collaborative BI, since collaboration is far more effective than competition. Executives in multinational companies keep a close watch on changing global patterns.

Social and economic factors are the most important for analyses related to business strategy. Qualitative analysis of environmental factors also enables executives to make the right decisions; they consider several factors from more than one discipline of social life. Quantitative analysis, on the other hand, helps identify tangible results, although it can also impose limitations on the generated results when it comes to the implementation stage.

...

Download Full Paper Here

Demographic Spatial Data Management in Indonesia with the Approach of Geographic Information System Model

Introduction

Rapid population growth in a particular area gradually causes complex problems for society and its environment. Indonesia ranks 4th among the most populated countries in the world. Based on the results of the population census, in 2013 the population of Indonesia was 240.5 million people, which places Indonesia among the developing countries with the largest populations, after China and India. Compared with the 2000 census, this reflects population growth in Indonesia of approximately 1.98% per year.

Based on population projections, the population of Indonesia in 2050 is predicted to reach 366 million people. Based on data from the World Population Datasheet, the most populated countries in the world and their projections for 2050 are shown in Table 1.

The impact of overpopulation is closely related to the size of the occupied area of a country. A large population can trigger problems, but it can also be an asset: most notably, a large population can be a country's most influential asset if the human resource quality of that population is high. Although Indonesia ranks 4th in population size, it stands in 121st position in the world for human resource quality (as of 2014), still far behind China, which has the largest population in the world together with a high-quality population. Population quality should therefore be the government's concern, as it is the most prominent factor in the prosperity and quality of life of all citizens. Astronomically, Indonesia is located between 94° 45' E and 141° 05' E longitude and between 6° 08' N and 11° 15' S latitude, where 1° at the equator is equivalent to 111 km. Indonesia thus extends over ±7,700,000 km², with a total land area of ±1,826,440 km², and is divided into 34 provinces. As the country with the fourth largest population in the world, with ±238,452,952 people in mid-2015, Indonesia had an average population density of around 131 people/km². A system that eases periodic demographic monitoring, beyond the census, is therefore clearly needed.
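As a quick check of the density figure above, the short sketch below simply recomputes it from the land area and mid-2015 population quoted in the text.

```python
# Recompute Indonesia's average population density from the figures quoted above.
population = 238_452_952      # people, mid-2015 (as quoted in the text)
land_area_km2 = 1_826_440     # km^2 of land area (as quoted in the text)

density = population / land_area_km2
print(f"Average density: {density:.1f} people/km^2")  # ~130.6, i.e. around 131
```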

Across Indonesia's 34 provinces, the population distribution on the map appears uneven. Based on the 2010 census, 60% of the population lived on Java Island, even though Java makes up only 7% of Indonesia's total area. On the other hand, Kalimantan, which has a much larger area, was occupied by only 5% of Indonesia's total population. Some of Indonesia's demographic problems are as follows:
  1. Problem of the Total Fertility Rate (TFR). A rising fertility rate burdens the government with providing physical facilities such as health services rather than investing in the intellectual aspect. Rising fertility causes a high rate of population growth in developing countries, which correlates negatively with the prosperity of the population (a standard way of computing the TFR is sketched after this list).
  2. Problem of the Mortality Rate (MR). The population's high life expectancy requires a bigger governmental role in providing shelter facilities.
  3. Problem of Population Composition (PC). Indonesia has an imbalanced population composition that can cause new population problems.
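For reference on item 1, the TFR is conventionally derived from age-specific fertility rates (ASFR); the paper does not give the formula, so the sketch below uses the standard formulation with five-year age groups, and the numeric rates are purely illustrative.

```python
# Total Fertility Rate from age-specific fertility rates (illustrative numbers only).
# ASFR_a = births to women in 5-year age group a / number of women in that group.
asfr = {  # hypothetical annual rates per woman
    "15-19": 0.048, "20-24": 0.126, "25-29": 0.128, "30-34": 0.098,
    "35-39": 0.055, "40-44": 0.015, "45-49": 0.004,
}
tfr = 5 * sum(asfr.values())   # multiply by 5: each group spans 5 years of exposure
print(f"TFR approx. {tfr:.2f} children per woman")
```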
These problems motivated the researchers to conduct a study of demographic data management in Indonesia using a geographic information system (GIS) model approach. Although demographic data management has been widely discussed in other research, the focus of this study is on demographic data management as a tool for monitoring data and projecting population density with a GIS model approach in order to help control the population. The system model is expected to be particularly strong in monitoring and controlling demographic data in every province of Indonesia.

Proposed Method

The study was conducted to obtain a system that can be used to monitor demographic data using a GIS model approach. The study was divided into three steps, as follows:
  1. Spatial data and demographic data initiation.
  2. Spatial and non-spatial data integration. This is the step of correlating spatial data and demographic data in the database.
  3. Indonesian demographic data visualization.
The system was designed to be as user-friendly as ordinary users of Information Technology (IT) expect, so that demographic data can be accessed through the web (Figure 1).
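To illustrate step 2, integrating spatial and non-spatial data, the sketch below joins a hypothetical province-level demographic table to placeholder province geometries through a shared province code and renders a simple density choropleth. The schema, codes, figures, and geometries are assumptions for illustration, not the paper's actual design.

```python
# Sketch of joining demographic records to province geometries (hypothetical data).
# Requires: pip install pandas geopandas matplotlib
import pandas as pd
import geopandas as gpd
from shapely.geometry import Polygon

# Non-spatial demographic data keyed by a province code (codes and figures illustrative).
demography = pd.DataFrame({
    "province_code": ["P1", "P2", "P3"],
    "population":    [10_000_000, 46_000_000, 33_000_000],
    "area_km2":      [700.0, 35_000.0, 33_000.0],
})
demography["density"] = demography["population"] / demography["area_km2"]

# Spatial data: simplified placeholder polygons standing in for province boundaries.
provinces = gpd.GeoDataFrame(
    {"province_code": ["P1", "P2", "P3"]},
    geometry=[
        Polygon([(0, 0), (1, 0), (1, 1), (0, 1)]),
        Polygon([(1, 0), (3, 0), (3, 1), (1, 1)]),
        Polygon([(0, 1), (3, 1), (3, 2), (0, 2)]),
    ],
    crs="EPSG:4326",
)

# Integration step: attach demographic attributes to each province geometry.
merged = provinces.merge(demography, on="province_code", how="left")

# Visualization step: a simple choropleth of population density.
merged.plot(column="density", legend=True)
```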

Demography Theory

Some related research has been conducted, such as a study that investigated the map-making process using Scalable Vector Graphics (SVG). That study emphasized SVG technology for visualizing area maps. In its development, SVG has become a language for building engaging sites. SVG is a web graphics file format for presenting graphics and describing two-dimensional pictures based on the eXtensible Markup Language (XML).
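As a minimal illustration of how an area might be drawn with SVG, the sketch below writes a small XML document with a single polygon standing in for a province outline; the coordinates and label are arbitrary and purely illustrative.

```python
# Write a minimal SVG file with one polygon standing in for a mapped region.
svg = """<?xml version="1.0" encoding="UTF-8"?>
<svg xmlns="http://www.w3.org/2000/svg" width="200" height="200" viewBox="0 0 200 200">
  <!-- Arbitrary polygon representing a province boundary -->
  <polygon points="20,180 40,60 110,20 180,70 160,170"
           fill="#7fb3d5" stroke="#1b4f72" stroke-width="2"/>
  <text x="70" y="110" font-size="12">Province A</text>
</svg>
"""
with open("region.svg", "w", encoding="utf-8") as f:
    f.write(svg)
```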

Another study investigated demographic problems in Indonesia, focusing on the demographic problems faced by the government as well as the national impact of the population. Another problem analyzed concerns employment, showing that 77% of employees in Indonesia still have a low level of education; the resulting impact on per capita income significantly influences the citizens' quality of life. Other demographic features also concern the study, such as the rates of divorce and marriage, which influence the fertility and mortality rates that can serve as indicators of a country's prosperity. A country's prosperity indicators can be significantly influenced by several factors, such as the fertility and mortality rates recorded by the Statistical Bureau. Put simply, people are both the subjects and the objects of development; without early anticipation, this will cause national imbalance. Furthermore, based on the literature review above, monitoring of population development is needed to keep the population balanced and to keep government programs aligned with national prosperity, using the geographic information system to be developed here.

Demography is the scientific study of population size, distribution, and composition, as well as how those factors change over time. Demographic science can draw on quantitative or qualitative data: quantitative demography mostly uses statistical and mathematical figures, while qualitative demography explains demographic aspects through analytical description. In addition, demographic studies systematically examine the development, phenomena, and problems related to demography and the surrounding social situation. Demographic science also increasingly involves interdisciplinary studies integrated with demographic analysis, which is known as social demography. There are several opinions on the definition of demography:
  1. It is the science that studies the population of a particular area in terms of its size, structure (composition), and development (change).
  2. It is the science that examines the size, distribution, territory, and composition of a population, together with the changes and their causes, which usually arise from the rates of fertility, mortality, migration, and social mobility.
  3. It is the mathematical and statistical study of the size, composition, and spatial distribution of the population, and of the changes in these aspects that continually occur as a result of fertility, mortality, marriage, migration, and social mobility.
Three important aspects in the study of demography are fertility, mortality, and migration, as shown in Figure 2 (a simple balancing-equation sketch relating them to population change follows below). In addition, there are two supporting aspects in demography: social mobility and the marriage rate. Population data can be obtained in the following ways:
  1. Population census: the whole process of gathering, processing, presenting, and assessing demographic data relating to demographic, socio-economic, and environmental characteristics.
  2. Population registration: the recording of population data by the individuals concerned whenever a population change occurs; it is administered by the Ministry of Home Affairs through local village offices.
  3. Population survey: the recording of information about the population for the purposes of broader and deeper specialized studies.
Examples are the mobility survey and the fertility survey of Yogyakarta citizens. Population surveys are conducted because the population census and registration have limitations and weaknesses, while demographic information can also be obtained through the census. In addition, the data used in this study are secondary data from the Statistical Bureau, used as a simulation. The spatial data of the Indonesian territory is adopted from the Google Maps API from www.google.com.
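To show how the three main aspects named above (fertility, mortality, and migration) drive population change over time, the sketch below applies the standard demographic balancing equation; the one-year figures are hypothetical and serve only to illustrate the bookkeeping.

```python
# Demographic balancing equation: population change from births, deaths, and migration.
def project_population(p0, births, deaths, immigrants, emigrants):
    """P(t+1) = P(t) + B - D + I - E  (the basic balancing equation)."""
    return p0 + births - deaths + immigrants - emigrants

# Illustrative (hypothetical) one-year figures for a province:
print(project_population(p0=3_500_000, births=70_000, deaths=25_000,
                         immigrants=12_000, emigrants=9_000))
```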

Results and Discussion

...