The complexity of business intelligence (BI) processes needs to be explored in order to ensure that the BI system properly treats tacit knowledge as part of the data source in the BI framework.
This study proposes and evaluates a novel hybrid ensemble model that combines AdaBoost and Gradient Boosting for short-term electricity consumption forecasting. The model is designed to address the challenges posed by nonlinear load fluctuations influenced by meteorological and operational factors, which often lead to reduced forecasting accuracy, grid instability, and inefficient resource utilization. To enhance prediction performance, the dataset undergoes comprehensive preprocessing, including removal of missing target values, median imputation of feature gaps, and standardization for linear and SVR models. An 80/20 train-test split with a fixed random seed ensures reproducibility. Baseline models—Linear Regression, SVR, Random Forest, Gradient Boosting, and AdaBoost—alongside hybrid configurations such as Gradient Boosting + Random Forest and a two-stage voting ensemble, are developed using the scikit-learn framework. The proposed hybrid model integrates AdaBoost and Gradient Boosting within a VotingRegressor architecture, with manually tuned ensemble weights ranging from 0.2 to 0.8 to optimize the R² score. Experimental results indicate that the hybrid AdaBoost + Gradient Boosting model achieves the best overall performance (R² = 0.153, RMSE = 61.888, Accuracy = 77.34%), outperforming all other models. The study’s key contributions include an effective weight-tuning strategy for ensemble learning, empirical validation through quantitative and visual analyses, and practical guidelines for deploying hybrid ensemble models in real-world energy forecasting systems.
PROPOSED METHOD
This research adopts an experimental quantitative methodology to investigate the effectiveness of a hybrid ensemble model combining AdaBoost and Gradient Boosting for short-term energy consumption forecasting. The methodological workflow comprises four main stages: (1) data preprocessing, (2) dataset partitioning, (3) model development—including both baseline and hybrid models—and (4) model evaluation using standard performance metrics. Each stage is designed to ensure reproducibility, robustness, and fair comparison across models.
The proposed method introduces a systematically designed hybrid ensemble framework that integrates AdaBoost and Gradient Boosting within a weighted VotingRegressor. The model aims to optimize short-term energy consumption forecasting by capturing both nonlinear interactions and difficult-to-predict fluctuations through adaptive ensemble learning. The approach comprises four main components: dataset partitioning, preprocessing, base model initialization, and hybrid model construction with manual ensemble weight tuning.
A. Dataset Partitioning
Let D = {(xi, yi) | i = 1, 2, …, n} represent the original dataset, where xi denotes the feature vector and yi is the target energy consumption. The dataset is randomly split into training and testing subsets using an 80:20 ratio. A fixed random_state = 42 is applied to ensure reproducibility across experiments. The training set is used exclusively for model learning, while the test set is reserved for out-of-sample evaluation.
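A minimal sketch of this partitioning step in scikit-learn, assuming a pandas DataFrame loaded from a hypothetical CSV file with an illustrative target column name (neither the file nor the column is specified in the text):

```python
# Illustrative sketch of the 80:20 split described above (file and column names are assumed).
import pandas as pd
from sklearn.model_selection import train_test_split

df = pd.read_csv("energy_consumption.csv")    # hypothetical file name
X = df.drop(columns=["energy_consumption"])   # feature vectors x_i
y = df["energy_consumption"]                  # target y_i

# 80:20 split with a fixed seed for reproducibility, as stated in the text.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)
```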
B. Preprocessing Pipeline
Prior to model training, data preprocessing is performed to enhance model robustness and stability:
Target Cleansing: All rows with missing values in the target variable y are removed to eliminate label noise.
Feature Imputation: Missing values in input features are imputed using the median of each respective column. Median imputation is chosen for its resilience against skewness and outliers.
Feature Standardization: For models sensitive to feature scale (namely, Linear Regression and SVR), feature values are standardized using the z-score formula as in (1).
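A hedged sketch of these preprocessing steps together with the weighted AdaBoost + Gradient Boosting ensemble described in the method overview, continuing the split sketch above; apart from the 0.2 to 0.8 weight range and the fixed seed reported in the text, all hyperparameters and the use of the test set for weight selection are assumptions:

```python
# Sketch only: median imputation, z-score scaling for the scale-sensitive baselines, and a
# weighted AdaBoost + Gradient Boosting VotingRegressor tuned manually over ensemble weights.
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import AdaBoostRegressor, GradientBoostingRegressor, VotingRegressor
from sklearn.metrics import r2_score

# Median imputation of feature gaps (robust to skewness and outliers).
imputer = SimpleImputer(strategy="median")
X_train_imp = imputer.fit_transform(X_train)
X_test_imp = imputer.transform(X_test)

# z-score standardization, applied only for the Linear Regression and SVR baselines.
scaler = StandardScaler()
X_train_std = scaler.fit_transform(X_train_imp)
X_test_std = scaler.transform(X_test_imp)

# Manual weight tuning of the hybrid ensemble over the 0.2-0.8 range mentioned in the text.
best_w, best_r2 = None, -np.inf
for w in np.arange(0.2, 0.81, 0.1):
    hybrid = VotingRegressor(
        estimators=[
            ("ada", AdaBoostRegressor(random_state=42)),
            ("gb", GradientBoostingRegressor(random_state=42)),
        ],
        weights=[w, 1.0 - w],
    )
    hybrid.fit(X_train_imp, y_train)
    r2 = r2_score(y_test, hybrid.predict(X_test_imp))
    if r2 > best_r2:
        best_w, best_r2 = w, r2
```

In practice a separate validation split (rather than the test set) would normally guide the weight search; the sketch simply mirrors the R²-driven tuning described in the abstract.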
RESULTS AND DISCUSSION
A. Comparative Model Performance
The results are summarized in Table II, which shows that the hybrid AdaBoost + Gradient Boosting ensemble consistently outperforms all other models, achieving the highest R² score (0.153), the lowest RMSE (61.888), and a competitive accuracy level of 77.34%. This performance suggests that the hybrid approach successfully captures the nonlinear and volatile nature of short-term energy consumption patterns, particularly due to the complementary strengths of AdaBoost’s adaptive weighting and Gradient Boosting’s sequential error correction. The three top-performing models are all hybrid ensembles, reaffirming the hypothesis that multi-algorithmic integration enhances forecasting capability in nonlinear time series data. In contrast, all linear models (Linear Regression, Lasso, Ridge, ElasticNet) exhibit negative R² scores, reflecting their poor fit to the complex fluctuation patterns inherent in energy consumption data.
B. Model Comparisons
Figure 3 presents a comparative bar chart of the R² scores across all evaluated models. The figure clearly illustrates the performance hierarchy, with the Hybrid AdaBoost + Gradient Boosting ensemble achieving the highest R² value (0.153), thereby outperforming all other models in terms of variance explanation. This is followed closely by the Voting GB + (AdaBoost + GB) ensemble and the GB + RF hybrid, both registering identical R² scores (0.134). The fourth-best performer is the standalone Gradient Boosting Regressor, which, although not hybridized, maintains a competitive R² of 0.083. In stark contrast, all linear models—including Linear Regression, Ridge, Lasso, and ElasticNet—yield negative R² scores, indicating that these models perform worse than a naive mean predictor. The bar chart thereby reinforces the central claim of this study: hybrid ensemble methods significantly improve predictive accuracy and model generalization in short-term energy forecasting tasks, especially in the presence of nonlinear load fluctuations.
Non-Communicable Diseases (NCDs) pose a critical threat to global public health, with Indonesia experiencing significant challenges due to high mortality rates and uneven regional distribution. In Banten Province, limited access to labeled health data hampers effective, data-driven intervention strategies. This study proposes a semi-supervised learning approach to develop a regional classification model for NCDs. The methodology begins with K-Means clustering applied to data from 254 community health centers (Puskesmas) to generate pseudo-labels. Various cluster configurations (k=2 to 8) were evaluated, with the optimal result being two clusters based on a silhouette score of 0.735. These clusters were then used to create a semi-labeled dataset for supervised learning. Eight classification algorithms—CN2 Rule Inducer, k-Nearest Neighbor (kNN), Logistic Regression, Naïve Bayes, Neural Network, Random Forest, Support Vector Machine (SVM), and Decision Tree—were trained and compared. Among them, the Neural Network model achieved the highest performance, with an AUC of 0.999 and an MCC of 0.976, indicating excellent stability and predictive accuracy. The findings validate the effectiveness of semi-supervised learning for health classification tasks when labeled data is scarce. This approach can serve as a valuable decision-support tool for regional health planning and targeted interventions, enhancing the precision and efficiency of public health responses.
METHOD
The methodology of this study begins with identifying critical issues related to regional classification based on the types of Non-Communicable Diseases (NCDs) in Banten Province. Subsequently, medical data is collected from 254 community health centers, which are distributed across eight administrative regions. Initially, the collected data undergoes a pre-processing phase aimed at ensuring data quality and suitability for subsequent analysis. This includes normalization of all numerical attributes using min-max scaling to ensure uniform feature ranges, which is a critical requirement for K-Means clustering due to its reliance on distance-based similarity measures.

Following this preliminary processing, an unsupervised learning method utilizing the K-Means clustering algorithm is applied to categorize regions based on discernible data patterns. K-Means was selected due to its efficiency in clustering based on attribute similarity, ease of implementation [44], and proven effectiveness in health-related research [45], particularly in generating pseudo-labels from unlabelled datasets such as medical imagery [46]–[48]. Moreover, K-Means demonstrates strong computational performance and is well-suited to medium-sized, numerically scaled datasets such as those used in this study [49]. The resulting clusters generated through this method serve as pseudo-labels or target classes for constructing the subsequent classification model.

Before proceeding to the supervised learning phase, an additional data pre-processing step is performed to align the dataset format with the newly assigned cluster labels. The classification model is then developed using a supervised learning approach, evaluating the performance of eight machine learning algorithms, specifically CN2 Rule Inducer, Random Forest, Neural Network, Naïve Bayes, k-Nearest Neighbor (kNN), Decision Tree, Support Vector Machine (SVM), and Logistic Regression. Each algorithm's performance is rigorously assessed to identify the most effective model for accurately classifying regions according to NCD types. The final stage involves deploying the best-performing classification model as a practical tool to facilitate enhanced health mapping and targeted intervention planning within Banten Province. All analytical processes in this research utilize Orange Data Mining software and the R programming language as the primary computational tools.
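The study itself uses Orange Data Mining and R; the following Python (scikit-learn) sketch only illustrates the pseudo-labeling logic described above, with min-max scaling, a k = 2 to 8 silhouette sweep, and the winning cluster labels attached as the classification target. The file name and feature layout are hypothetical:

```python
# Illustrative pseudo-labeling sketch; the actual study used Orange Data Mining and R.
import pandas as pd
from sklearn.preprocessing import MinMaxScaler
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

ncd = pd.read_csv("puskesmas_ncd.csv")        # hypothetical file: 254 Puskesmas records
X = MinMaxScaler().fit_transform(ncd)         # min-max scaling, as required for K-Means

# Sweep k = 2..8 and keep the configuration with the highest silhouette score.
best_k, best_score, best_labels = None, -1.0, None
for k in range(2, 9):
    km = KMeans(n_clusters=k, n_init=10, random_state=0)
    labels = km.fit_predict(X)
    score = silhouette_score(X, labels)
    if score > best_score:
        best_k, best_score, best_labels = k, score, labels

ncd["cluster"] = best_labels                  # pseudo-labels for the supervised stage
```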
Discussion
The findings of this study clearly illustrate that employing a semi-supervised learning methodology—initiating with K-Means clustering followed by dataset labeling—effectively established a robust foundation for developing a regional classification model based on Non-Communicable Disease (NCD) case data. Utilizing Orange Data Mining significantly streamlined analytical tasks, particularly in data exploration, model development, and performance evaluation phases. The initial clustering yielded two clusters with an optimal silhouette score of 0.735, denoting strong inter-cluster separation. These clusters, specifically Cluster C1 (regions with high disease prevalence) and Cluster C2 (regions with lower disease prevalence), subsequently served as pseudo-labels for training the supervised learning model. Although this pseudo-labeling approach offers a practical solution in the absence of ground-truth labels, it also introduces potential limitations, such as the risk of inaccurate grouping due to reliance on purely statistical similarity rather than domain-expert validation.

During the supervised learning stage, eight distinct machine learning algorithms were evaluated to determine the most effective classification model. The majority of tested models demonstrated excellent performance, as evidenced by Area Under the Curve (AUC) values exceeding 0.98, reflecting robust discriminative capabilities. Among these, the Neural Network and k-Nearest Neighbor (kNN) models stood out prominently, achieving nearly perfect scores in key evaluation metrics such as Classification Accuracy (CA), F1-score, Precision, and Recall. Both models also recorded exceptionally high Matthews Correlation Coefficient (MCC) scores, reinforcing their reliable classification performance, especially significant given potential data imbalances.

Nonetheless, it is important to acknowledge that high performance on a small dataset can be susceptible to overfitting. To mitigate this, 10-fold cross-validation was utilized to validate model generalizability. In addition, dropout regularization was employed in training the Neural Network model to prevent co-adaptation of neurons, thereby enhancing the model’s capacity to generalize across varying data instances. These methodological safeguards were critical in ensuring that the models' performance metrics were not merely artifacts of memorization or spurious patterns in the training data.
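The 10-fold cross-validation safeguard mentioned above can be expressed compactly. In the sketch below, scikit-learn's MLPClassifier stands in for the study's Neural Network learner, dropout is not reproduced, and synthetic data replaces the Puskesmas dataset purely to keep the example self-contained:

```python
# Sketch of 10-fold cross-validated evaluation; everything here is an illustrative stand-in.
from sklearn.datasets import make_classification
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=254, n_features=10, n_classes=2, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(50,), max_iter=2000, random_state=0)
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
print(cross_val_score(clf, X, y, scoring="roc_auc", cv=cv).mean())
```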
The advancement of robotics technology has grown in recent years, offering substantial potential across various industrial sectors. One prominent application area is logistics, where robotic systems can play a vital role in automating material handling processes to improve operational efficiency and reduce human labor. Despite this progress, challenges remain in achieving improved accuracy, speed, and load-handling capabilities. Among the available robotic solutions, the line follower robot stands out as a simple yet effective approach for automating transportation tasks. Line follower robots have been widely implemented in industrial settings due to their low cost, ease of deployment, and relatively simple control systems.
The selection of a line follower robot in this study is driven by several considerations. First, the technology offers simplicity and efficiency, making it well-suited for small- to medium-scale logistics operations. Second, it is cost-effective and composed of affordable components, which supports its use in resource-constrained environments. Third, it offers flexibility; route modifications can be achieved by reprogramming or physically altering the track. Lastly, the architecture is scalable, allowing for future upgrades in terms of payload or sensor integration. Several previous studies have explored the development of line follower robots. Ridarmin et al. (2019) proposed a prototype utilizing an Arduino Uno and TCRT5000 sensors for tracking a dark line, demonstrating basic autonomous navigation. Susilo (2018) introduced a prototype for automatic object delivery that incorporated a load cell sensor to determine the object’s weight and delivery destination, showcasing an early attempt at functional integration for logistics applications.
While these studies laid the foundational work, challenges remain in increasing navigation accuracy, improving payload handling, and optimizing system integration for practical use cases. This study aims to address these challenges by designing and developing an autonomous line follower robot capable of transporting lightweight objects (up to 100 grams) along a fixed path. The proposed system integrates real-time navigation and load transport using an Arduino UNO microcontroller, BFD-1000 infrared sensors (as a more accurate alternative to TCRT5000), and an L298N motor driver for efficient motor control.
The novelty of this work lies in its optimized design for power-efficient movement, enhanced sensor precision, and application in small-scale logistics environments, an area that remains underexplored in research. This approach is intended to contribute to the development of accessible and low-cost automation solutions for small and medium-sized enterprises (SMEs). The development of smart mobile robots based on line follower technology has been extensively studied and applied across various fields, particularly in logistics and healthcare industries. This technology enables robots to follow predetermined paths using infrared sensors that detect color contrasts between the line and the background surface. Mahendra et al. (2019) and Hossain et al. (2021) demonstrated that line-following navigation systems offer high reliability in structured indoor environments and are relatively low-cost to implement. In the context of object transportation automation, this approach has proven effective for tasks involving the delivery of goods or lightweight materials from one location to another without direct human involvement.
Beyond navigation technology, another critical aspect of such robotic systems is the ability to carry or push objects. Studies by Rathore et al. (2019) and Kale et al. (2020) discuss the design of actuators and robotic mechanisms to lift or push objects automatically. The integration of additional sensors, such as ultrasonic modules, has also been explored to enhance obstacle detection and navigation safety. Recent innovations even incorporate Internet of Things (IoT) connectivity, as discussed in Hossain (2021), enabling real-time monitoring and control of the robot. Therefore, a line follower-based robotic system equipped with object-handling capabilities presents a promising solution for efficient and adaptive internal transport automation.
Experimental Setup
In this study, we design the system using an Arduino UNO microcontroller, which functions as the processor for both incoming and outgoing data. The components are integrated into a single structural frame, including motorized wheels that serve as the base support for the BFD-1000 line sensor, which is responsible for detecting the navigation path. The frame of the line follower robot is constructed from acrylic material, with the robotic arm positioned at the topmost section to facilitate efficient object pickup and placement. The following sections present the system block diagram and the workflow diagram of the object transfer robot based on line follower navigation.
The L298N driver is used to control both the rotational speed and direction of DC motors. It receives power from a 5V input, which can be supplied either through the 5V output of the microcontroller or from a step-down voltage regulator. The driver receives control signals from the microcontroller to determine whether the motor should move forward, turn, or stop. Additionally, the microcontroller sends speed control signals based on the programmed instructions, allowing the motor to operate at the desired speed when moving forward or turning.
The BFD-1000 sensor is used as the path detection component for the line follower robot. A total of five BFD-1000 line sensors are employed and calibrated using potentiometers. The calibration process is carried out to determine the appropriate infrared light intensity received by the photodiode sensor, enabling it to differentiate between high and low logic levels. This calibration is optimized for a sensor height of approximately 0.8 cm above the reflective surface.
Robotic Arm Design
The robotic arm is designed to assist in the picking and placing of objects. It utilizes four servo motors that function as the gripper and actuators for movement. The servo motors are directly connected to the microcontroller without the use of an external driver. The microcontroller sends control signals to the servo motors, instructing them on the direction and angle of rotation, thereby enabling the robotic arm to grasp and place objects as required. The robotic arm is assumed to consist of n revolute joints (rotary joints), each driven by a servo motor. The robot operates in a 2D or 3D environment. Each joint contributes an angular rotation denoted by θi, and each arm segment has a length Li. The base frame is fixed. For a planar 2D robotic arm, the end-effector position (x, y) is calculated using:
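The equation itself is not reproduced in this excerpt; the standard planar forward kinematics consistent with the description above is

x = \sum_{i=1}^{n} L_i \cos\!\left(\sum_{j=1}^{i} \theta_j\right), \qquad y = \sum_{i=1}^{n} L_i \sin\!\left(\sum_{j=1}^{i} \theta_j\right)

where each joint angle θj accumulates along the chain and Li is the length of segment i.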
The term “workplace self-confidence” describes an individual’s belief in their ability to carry out duties, make wise choices, and successfully handle obligations in a work environment. In addition to being a vital component of a person’s mental and emotional health, it also plays a significant role in determining their level of productivity, job performance, and general well-being. Psychological characteristics including self-awareness, self-efficacy (belief in one’s own abilities), emotional intelligence (capacity to regulate emotions and interpersonal interactions), and self-esteem are essential to professional confidence. These traits are dynamically influenced by both external environmental stimuli and internal cognitive states. High self-confidence among employees increases their chances of taking on new tasks, taking part in decision-making, and proactively solving problems, all of which increase workplace productivity.
Furthermore, by lowering anxiety, boosting resilience, and cultivating a positive view on career advancement, self-confidence promotes mental health. This is especially noticeable when workers feel overqualified and have creative self-confidence because they are more likely to act creatively and improve workplace performance. Research on the psychological effects of artificial intelligence (AI) reveals how incorporating AI into the workplace can affect workplace self-confidence by influencing personal traits like trust, anxiety, and self-efficacy. According to research, workers who have more self-efficacy and previous technological experience are more likely to trust AI, which increases their confidence in their ability to use AI tools efficiently. Like human trust, trust in AI is essential for enabling staff members to take an active role in task management and decision-making, which raises overall productivity.
Dynamical systems within cognitive agent models are designed to simulate workplace environments, taking into account varying factors such as job demands, leadership styles, and social influences to predict and support psychological traits like self-efficacy and emotional regulation. These models have the potential to gradually boost workplace self-confidence by encouraging positive feedback through helpful AI-driven interactions. Additionally, ethical considerations in AI design are crucial for developing systems that improve user well-being by avoiding bias, regulating emotional states, and incorporating compassion, which promotes a more positive, self-assured, and productive work environment. Research on smart systems for cognitive computing demonstrates how AI can integrate fundamental cognitive functions like language processing and expert knowledge representation to resolve ambiguities in human-computer interactions. In order to improve cognitive processing and enable machines to assist complex decision-making with human-like reasoning and intuition, this collaborative intelligence model blends AI and human intelligence (HI).
The study demonstrates the potential of neural networks in refining predictive models, highlighting their adaptability in diverse contexts. Building on this adaptability, the integration of ontology-based approaches, as discussed in, offers a novel pathway to enhancing psychotherapy interventions, showcasing the versatility of computational intelligence in varied domains. Similarly, the work in highlights how ensemble learning can effectively handle high-dimensional data, enabling precise classification and prognosis in complex scenarios. In a related vein, the use of handwriting analysis in personality assessment, as illustrated in [15], underscores the potential of machine learning in psychological profiling, emphasizing its broad applicability across disciplines. Furthermore, the study in [16] exemplifies how neural networks can be leveraged for accurate forecasting in energy generation systems, demonstrating their efficacy in addressing practical challenges beyond traditional boundaries.
Neural network architecture
The neural network model, as depicted in Fig. 1, is structured to simulate the progression and stabilization of psychological traits by integrating dynamic cognitive states. This architecture consists of an input layer, a hidden layer utilizing physics-inspired transformations, and an output layer to generate adaptive temporal traits. The input layer processes three core cognitive states (self-esteem, self-efficacy, and self-concept), capturing their fluctuations due to environmental factors and internal feedback. These cognitive states act as the foundation for generating complex psychological traits. Moving through the hidden layer, the model incorporates a Maxwell-Boltzmann distribution to represent how these cognitive states fluctuate initially, akin to the dispersion of particle speeds in physics. This distribution allows the model to simulate initial instability in cognitive states before they begin to settle. To further shape the output, a sigmoid function is applied within the hidden layer, introducing non-linear scaling that drives the cognitive states towards equilibrium.
Motivation, Learned Helplessness, and Social Anxiety are the final long-term psychological traits produced by the output layer, which represent the consistent results of the underlying cognitive processes. The traits that are produced reflect the way that cognitive states stabilize and change over time, providing information about how both adaptive and maladaptive traits evolve in response to work environments. A visual flow from initial inputs reflecting cognitive states, through their distribution and change inside the hidden layer, to the appearance of psychological traits in the output layer may be seen in Fig. 1. As this model demonstrates, the physics of stability and stabilization can be used to form complex psychological traits in a structured neural network. In this model, interpretability is achieved by leveraging the Maxwell-Boltzmann distribution, which provides a statistically interpretable representation of variability within cognitive states, such as self-esteem, self-efficacy, and self-concept.
This distribution operates within an equilibrium framework that visually illustrates how each cognitive state stabilizes over time. By framing cognitive fluctuations as distributions converging toward equilibrium, the model makes the dynamic stabilization process both transparent and accessible, enhancing interpretability.
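A minimal conceptual sketch of such an architecture is given below. It is not the authors' implementation: the hidden width, the random placeholder weights, and the Maxwell-Boltzmann scale parameter are all assumptions, and only the described flow (three cognitive-state inputs, a Maxwell-Boltzmann-shaped fluctuation term plus sigmoid in the hidden layer, three trait outputs) is taken from the text:

```python
# Conceptual sketch only; weights are random placeholders, not trained parameters.
import numpy as np
from scipy.stats import maxwell

rng = np.random.default_rng(0)
W_in = rng.normal(size=(3, 8))    # [self-esteem, self-efficacy, self-concept] -> hidden (width 8 assumed)
W_out = rng.normal(size=(8, 3))   # hidden -> [motivation, learned helplessness, social anxiety]

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(cognitive_states, scale=1.0):
    h = cognitive_states @ W_in
    h = maxwell.pdf(np.abs(h), scale=scale)  # Maxwell-Boltzmann-shaped initial fluctuation
    return sigmoid(h) @ W_out                # non-linear scaling toward equilibrium, then readout

traits = forward(np.array([0.6, 0.7, 0.5]))  # illustrative cognitive-state values in [0, 1]
```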
Climate change is one of the most urgent and complex challenges faced by humanity today. Its widespread impact is felt across ecosystems, economies, and human societies, altering the natural balance that sustains life on Earth. Rising global temperatures are melting glaciers, increasing sea levels, and intensifying extreme weather events. Biodiversity loss and agricultural disruptions also threaten the stability of ecosystems and food security. These changes demand a deeper understanding of the Earth’s climate dynamics to predict and mitigate their consequences effectively. Traditional climate models, based on statistical and numerical approaches, have laid the foundation for understanding climate patterns. However, they face significant challenges when applied to the large, diverse, and interconnected datasets generated by modern climate monitoring systems. These limitations highlight the need for advanced computational models that can comprehensively analyze climate data and provide actionable predictions.
Understanding climate data complexity
Climate data is inherently multidimensional, capturing the interactions between temporal patterns, spatial variations, and human-induced influences. Each dimension offers unique insights into the processes shaping the Earth’s climate system. Time series data, such as temperature, precipitation, and greenhouse gas concentration measurements, are essential for understanding long-term trends, detecting anomalies, and forecasting future states. These datasets reveal patterns of global warming, seasonal fluctuations, and extreme weather events. However, temporal data alone cannot provide a complete picture, as it lacks information about how these changes vary across different regions or ecosystems.
Spatial data, such as satellite imagery, complements temporal datasets by offering a detailed view of the Earth’s surface. High-resolution images capture phenomena such as deforestation, glacier retreat, urban expansion, and vegetation health. These datasets allow researchers to assess the direct impact of climate change on specific regions and ecosystems. Their sheer volume and complexity present challenges in extracting meaningful insights. Advanced machine learning techniques are required to process these high-dimensional datasets and detect subtle changes that are often overlooked by traditional approaches.
Socioeconomic and environmental indicators provide another critical layer of information by linking human activities to climate change. Indicators such as CO2 emissions, energy consumption, urban development, and deforestation rates highlight the anthropogenic drivers of climate dynamics. These indicators also reveal the socioeconomic consequences of climate change, such as resource scarcity, economic instability, and public health challenges. Despite their importance, integrating these indicators with temporal and spatial data into a unified framework remains a complex task, requiring innovative modeling approaches that account for interactions between diverse data types.
Significance of the research
This study holds significant potential for advancing climate prediction by integrating multiple data dimensions into a unified framework. Existing models are often limited in their ability to holistically analyze interconnected factors or provide interpretable outputs. By leveraging advanced deep learning methodologies, such as TCNs for time series data, CNNs for spatial data, and Explainable AI for interpretability, this research seeks to address these limitations comprehensively.
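A hedged sketch of the multimodal idea outlined above is shown below: a dilated 1-D convolution branch (TCN-style) for climate time series and a 2-D convolution branch for gridded or satellite inputs, fused into a single prediction head. All layer sizes, input shapes, and names are assumptions, not the paper's architecture:

```python
# Illustrative fusion of a TCN-style temporal branch and a CNN spatial branch (PyTorch).
import torch
import torch.nn as nn

class ClimateFusionNet(nn.Module):
    def __init__(self, n_series_channels=3, n_image_channels=4):
        super().__init__()
        # Temporal branch: dilated 1-D convolutions over climate time series.
        self.temporal = nn.Sequential(
            nn.Conv1d(n_series_channels, 16, kernel_size=3, dilation=1, padding=2),
            nn.ReLU(),
            nn.Conv1d(16, 16, kernel_size=3, dilation=2, padding=4),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        # Spatial branch: 2-D convolutions over satellite/gridded patches.
        self.spatial = nn.Sequential(
            nn.Conv2d(n_image_channels, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(16 + 16, 1)  # e.g. a next-step temperature target (assumed)

    def forward(self, series, image):
        t = self.temporal(series).flatten(1)
        s = self.spatial(image).flatten(1)
        return self.head(torch.cat([t, s], dim=1))

# Example shapes: 8 samples, 3 variables over 64 time steps; 4-band 32x32 image patches.
model = ClimateFusionNet()
out = model(torch.randn(8, 3, 64), torch.randn(8, 4, 32, 32))
```

Post-hoc explainability (for example, attribution of the fused prediction to each branch) would sit on top of such a model and is not shown here.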
One key outcome of this research is improved prediction accuracy, which enables more precise forecasts of climate variables like temperature and precipitation. These predictions are vital for developing effective strategies to mitigate and adapt to climate change impacts. Enhanced transparency, achieved through Explainable AI techniques, ensures that model outputs are interpretable, fostering trust among policymakers, researchers, and the general public. Additionally, actionable insights derived from the integrated framework empower stakeholders to implement targeted interventions, such as optimizing land use policies, mitigating deforestation, or improving urban planning to address climate risks.
This unified approach bridges gaps between diverse data modalities, enabling a deeper understanding of climate dynamics. By providing a scalable and interpretable solution, this research contributes to global efforts to combat climate change, supporting data-driven strategies for sustainable development and environmental preservation.
The business competition among different companies has exponentially increased in recent years. To remain in business, there is a pressing priority for an increased focus on customer satisfaction that ultimately fosters customer loyalty. Customer loyalty analysis is critically important to retaining current customers and attracting new customers. The proposed study focuses on an efficient approach to determining the customer’s loyalty and satisfaction with a product. This is determined by using machine intelligence and sentiment analysis of a large dataset of product reviews obtained through Amazon. A novel feature selection method is applied to improve performance on large datasets. This feature selection method is based on Dynamic Mutual Information (DMI), which helps in selecting only important features to reduce the dimensionality problems of very large datasets. Text preprocessing is performed initially, which includes stopword removal, tokenization, and lemmatization. SentiWordNet along with the Intelligent SVM technique is implemented for aspect-level sentiment analysis to categorize customer reviews into three different classes of loyalty.
Introduction
Recent advances in internet facilities have revolutionized and digitally transformed modern society. Recent innovations in technology have improved people’s lives by providing online banking, education, buying, and selling facilities for different products and services [1]; online sales have increased as people shop online instead of going to the mart or store. The pandemic has changed the way of shopping surprisingly. Nowadays, the e-commerce industry is growing rapidly by providing many online shopping websites. These sites include Amazon, eBay, Ali Express, and many more. One of the leading e-commerce websites, Amazon, has had more than 4.7 billion trades in the past year with more than 400 million active customers [2]. Such e-commerce shopping sites create more comfort for users and sellers. However, some difficulties arise in these processes. One of the main issues faced by users is selecting trustworthy sellers for the best products. There is a need to provide a platform through which users get the best products according to their choices. This can be achieved by providing customer reviews to users on social networking sites [3, 4]. These reviews and feedback can assist new customers with better product selection.
Literature Review
Among the available techniques in literature, the first technique works with subjective reviews and the other works with objective reviews. The proposed study uses subjective reviews to extract sentiment scores from the SentiWordNet dictionary. The polarity of aspect level reviews is calculated which are Positive, Negative, and Neutral. The proposed technique works with an intelligent SVM algorithm to extract the overall customer loyalty level toward a product. 98.7% accuracy is achieved for aspect-level sentiment analysis [18]. Document-level Sentiment sorting is performed on a movie reviews dataset to analyze the sentiment levels of users for different movies [26]. Table 1 shows an overview of similar works identified in the literature.
Research Methodology
In the proposed research, the sentiment score of user reviews for products is evaluated in different steps. In the first step, customer review data of products is obtained from Amazon. DMI calculates the entropy of two variables, measures the relevancy of variables with each other, and assigns a score. SentiWordNet library is used to calculate the polarity scores of the selected features to obtain the sentiment level of the aspect. An aggregate score of the polarities is calculated to identify the overall customer loyalty toward the product. A support vector machine algorithm is used to classify the reviews based on constructing a Hyperplane to segregate classes. The better the Hyperplane the better the classification process will be. SVM's advantage is that it can treat outliers efficiently.
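A sketch of the feature-scoring and SVM classification steps is shown below. scikit-learn's mutual_info_classif is used here only as a stand-in for the paper's Dynamic Mutual Information (DMI); the tiny corpus, the loyalty labels, and the column selection are illustrative, while the 0.05 threshold follows the text:

```python
# Illustrative stand-in for DMI-based term selection followed by SVM classification.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import mutual_info_classif
from sklearn.svm import SVC

reviews = ["battery life is great", "screen cracked after a week", "average phone"]
labels = [1, 0, 2]   # e.g. positive / negative / neutral loyalty classes (toy data)

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(reviews)

# Score each term's relevance to the loyalty label and keep only high-scoring terms.
scores = mutual_info_classif(X, labels, discrete_features=True)
keep = np.where(scores > 0.05)[0]   # threshold value mentioned in the text
X_selected = X[:, keep]

clf = SVC(kernel="rbf").fit(X_selected, labels)
```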
Polarity Analysis of Reviews
In the fourth step, POS tagging is applied to generate tags for individual terms. In the fifth step, DMI is used to generate a more reduced set of features so that performance is improved at the end. Entropy is calculated, and only those terms are considered that have values higher than the threshold value of 0.05. In the sixth step, SentiWordNet calculates the polarity score of only those terms that are filtered out; in this way only important terms are used in sentiment analysis. In the seventh step, the Sentence token score per word is computed. The sentence token score of important tokens is calculated to calculate the aspect level sentiment score as shown in equation (4).
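A simplified sketch of the POS-tagging and SentiWordNet scoring steps described above, using NLTK's SentiWordNet interface; the sense selection and aggregation rule are illustrative rather than the paper's equations, and NLTK resource names may vary by version:

```python
# Sketch of POS tagging plus SentiWordNet polarity scoring for a single review.
import nltk
from nltk import pos_tag, word_tokenize
from nltk.corpus import sentiwordnet as swn

for pkg in ("punkt", "averaged_perceptron_tagger", "sentiwordnet", "wordnet"):
    nltk.download(pkg, quiet=True)

def penn_to_swn(tag):
    # Map Penn Treebank tags to SentiWordNet POS codes.
    if tag.startswith("J"): return "a"
    if tag.startswith("V"): return "v"
    if tag.startswith("R"): return "r"
    if tag.startswith("N"): return "n"
    return None

def review_polarity(text):
    score = 0.0
    for word, tag in pos_tag(word_tokenize(text.lower())):
        swn_pos = penn_to_swn(tag)
        if swn_pos is None:
            continue
        synsets = list(swn.senti_synsets(word, swn_pos))
        if synsets:
            # First sense used as a simple approximation of the term's polarity.
            score += synsets[0].pos_score() - synsets[0].neg_score()
    return score   # > 0 positive, < 0 negative, near 0 neutral

print(review_polarity("The battery life of this phone is excellent"))
```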
In step eight, an accumulative score of the sentence words is calculated using equation (1), in which the aggregate score is computed. The review-level customer loyalty score is then calculated to classify the review into positive, negative, or neutral classes. The review shown in Table 5 is obtained through the SentiWordNet dictionary. The last three rows in Table 5 show the percentage of review characteristics in different classes, and the review is classified as positive, which is the highest percentage among the three classes.
Experiments and Results
The proposed methodology produces results in RapidMiner. SentiWordNet and SVM algorithms are used to generate results and predict customer loyalty. Input reviews are obtained from Amazon and are parsed, tokenized, and lemmatized. 5000 reviews are obtained about two different Samsung mobile phones. After preprocessing, features are extracted using a mutual information scheme, in which important features are filtered out using entropy. SentiWordNet 3.0 is used for PoS tagging and polarity score calculation. In this section, the sentiment analysis process and customer loyalty prediction using a single review are explained in Tables 4 and 5 for easy understanding. The aspect level sentiment score is calculated, and this score is aggregated to calculate the overall sentiment level of the customers. The percentage of positive, negative, and neutral reviews of 1000 reviews is calculated by using the following formula and presented in Figure 4.
The use of Artificial intelligence (AI) for educational purposes is examined in this assignment, along with how it might improve and personalize learning. The use of AI allows for the personalization of learning paths for each student, the identification of performance gaps, and the delivery of focused interventions. The literature review focuses on how AI can be used to optimize learners' progress toward autonomy, promote metacognitive learning and self-regulated practices, and support personalized language acquisition. With the use of AI technologies, which offer personalized learning materials, online interaction, and flexible learning pathways, students may take charge of their education. But there are also discussions about issues like the necessity for human interaction, data privacy, and ethical issues. The findings imply that in order to ensure the successful application of AI, ethical considerations must be carefully considered and continually assessed. The study finds that artificial intelligence (AI) has the ability to revolutionize education, but it also recommends more research to fill in any gaps and enhance applications in the future.
Introduction
The usage of complex algorithms and machine learning methods developed through artificial intelligence (AI) to automate activities, improve decision-making, and expand overall efficiency has revolutionized several industries. In the sphere of education, AI has the potential to revolutionize conventional teaching approaches and empower both teachers and pupils. AI can create personalized learning experiences that are catered to each student's needs, skills, and interests by analyzing vast amounts of data and retrieving insightful knowledge. This analysis aims to research the integration of AI technologies into academic settings to enrich the education process and address the diverse learning needs of students. By leveraging AI, tutors can gain practical insight into pupils' progress, pinpoint proficiency gaps, and supply targeted interventions to stimulate adequate learning outcomes. The literature review will cover the diverse research areas on this topic, pinpointing the major trends, issues, and future prospects. Applying the appropriate method, the results will present the key developments in this study.
Review of literature
According to Chen et al. 2021, the rapidly growing trend of utilizing AI in the educational field has created a new space for innovative research studies. This machine-based algorithm is highly capable of making suggestions and forecasts and even has decision-making agility. In the arena of schooling and education, AI can generate quality theoretical innovations with myriad applications and pedagogical marks. The paper underlines the function of AI tools in enabling personalized terminology learning. It underscores the prospect of AI to adapt instruction and supply tailored and quite accurate data and feedback to students, thereby managing their demands and nurturing better education techniques and developments (Chen et al. 2021). The technology can even track a student’s overall growth and understanding and can deliver recommendations accordingly with its high-end features of natural language processing and intelligent tutoring systems.

According to Chen et al. 2022, the integration of modern AI trends leverages the practice of personalized language acquisition. It has huge potential for adapting personalized instruction and recommendation-generating patterns that can dynamically handle the entire procedure of monitoring and tracking the understanding level of students and deliver scaffolding to each learner. The paper also has highlighted the role of technology in fostering metacognitive learning and self-regulated practices.
Arising issues are also identified by the authors, such as data privacy interruption, unethical practices of artificial intelligence, and the lack of practical training. The prospect of AI in optimizing learners toward autonomy is another topic covered in the paper. Learning resources can be accessed and self-directed learning is made possible through these technologies (Chen et al. 2022). The ability to explore content based on competence level and educational choices is made possible by customized suggestions and adaptive learning routes. This autonomy encourages learners to take control of their education, fostering accountability and self-control.

According to Mohammad Ali 2023, ChatGPT encourages pupils to engage in independent language exercises and tasks. The AI tool gives students access outside of the learning environment, enabling possibilities for ongoing schooling and individualized language evolution. With the autonomy equipped by AI tools like ChatGPT, learners may take charge of their academic ventures and hone self-regulation mastery. The article does, however, also admit significant tribulations with employing AI in language learning (Mohammad Ali 2023). When using AI technologies in education, it's vital to take into account issues like how poorly AI comprehends subtle linguistic and cultural distinctions as well as issues with data privacy and morals.
Recommendations
For conducting more effective research on this topic of AI integration in an educational context, several recommendations can be made. The study should incorporate primary data analysis besides reviewing secondary journals and research papers. It should conduct surveys and interviews engaging in active interaction with educators, administrators, and even the learners to receive insights on the practical usefulness or limits of AI integration in learning practices. Further, the long-term impacts of AI on pupil achievement outcomes can be gained by conducting longitudinal studies. Throughout the research process, strict ethical guidelines must be observed to protect the well-being, privacy, and rights of all participants.
This study proposes a logical Petri net model that leverages the modeling advantages of Petri nets in handling batch processing and uncertainty in value passing, and integrates relevant game elements from multi-agent game processes, in order to model multi-agent decision problems and resolve optimization issues in dynamic multi-agent game decision-making. Firstly, the attributes of each token are defined as rational agents, and utility function values and state probability transition functions are assigned to them. Secondly, decision transitions are introduced, and the triggering of the optimal decision transition is determined based on a comparison of token utility function values, along with an associated algorithm. Finally, a dynamic game emergency business decision-making process for sudden events is modeled and analyzed using the logic game decision Petri net.
Based on reachable markings, reachable graphs are constructed to analyze the dynamic game process. Algorithms are described for the generation of reachable graphs, and the paper explores how the logic game decision model for sudden events can address dynamic game decision problems, generate optimal emergency plans, and analyze resource conflicts during emergency processes. The effectiveness and superiority of the model in analyzing the emergency business decision-making process for sudden events are validated. A sudden event is an emergency that poses direct risks and impacts human health, life, and property, requiring urgent intervention to prevent further deterioration. These intervention measures are organized into a process, which is typically described in an emergency plan and referred to as the emergency response process.
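A toy sketch of the decision-transition idea described above is given below: tokens carry agent utility values, and among the enabled decision transitions the one whose input token has the highest utility fires first. This is only an illustration of the mechanism under assumed names and structures, not the paper's formal logic game decision Petri net or its reachable-graph algorithm:

```python
# Toy illustration of utility-driven decision transitions in a Petri-net-like structure.
from dataclasses import dataclass, field

@dataclass
class Token:
    agent: str
    utility: float            # utility function value assigned to the rational agent

@dataclass
class Place:
    name: str
    tokens: list = field(default_factory=list)

@dataclass
class DecisionTransition:
    name: str
    inputs: list               # input places
    output: Place

    def enabled(self):
        return all(p.tokens for p in self.inputs)

    def best_utility(self):
        return max(t.utility for p in self.inputs for t in p.tokens)

    def fire(self):
        for p in self.inputs:
            self.output.tokens.extend(p.tokens)
            p.tokens.clear()

def fire_optimal(transitions):
    """Among enabled decision transitions, fire the one with the highest token utility."""
    enabled = [t for t in transitions if t.enabled()]
    if not enabled:
        return None
    best = max(enabled, key=lambda t: t.best_utility())
    best.fire()
    return best.name

# Hypothetical emergency-response example: two candidate actions compete on agent utility.
p_evac = Place("evacuate", [Token("agent_A", 0.8)])
p_shel = Place("shelter", [Token("agent_B", 0.6)])
plan = Place("response_plan")
print(fire_optimal([DecisionTransition("t_evac", [p_evac], plan),
                    DecisionTransition("t_shelter", [p_shel], plan)]))  # -> "t_evac"
```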
In this process, all emergency personnel are dedicated to managing disasters to minimize or avoid the secondary impacts of the disaster. Generating better contingency plans before emergency responses has become an urgent issue to address. Evacuation time during emergencies is uncertain, and its stochastic analysis has been conducted by coupling the uncertainty of fire detection, alarm, and pre-movement with evacuation time. The forecasting model is event-dependent and takes into account many social and environmental elements regarding different sorts of events, such as socio-economic situations and geographical features. This is due to the great range of emergency occurrences, including both natural and man-made ones. The business decision-making process in disaster operations management varies greatly depending on the type of occurrence, taking into account factors like severity, impacted region, population density, and local environment, among others.
There are many different types of hazards present worldwide. The health of vulnerable people is placed at risk by natural, biological, technological, and sociological dangers, which also have the potential to seriously impair public health. For instance, the authorities in charge of providing clean water are responsible for the prevention of waterborne illnesses, while law enforcement and road transportation agencies are in charge of reducing traffic accidents. Zoonotic illnesses (diseases spread from animals to people) need coordinated action from the agricultural, environmental, and health sectors. These increases in new or reemerging diseases are attributed to a number of factors, including global warming, low vaccination rates in high-risk and vulnerable populations, growing vaccine resistance and skepticism, rising antimicrobial resistance, and expanding coverage, frequency, and speed of international air travel.

A professional who develops plans for emergencies, accidents, and other calamities is known as an emergency management director. Directors of emergency management work together with the leadership team of an organization to evaluate possible hazards and create best practices for handling them. Designing emergency procedures and developing preventative actions to lessen the risk of emergency circumstances occurring fall under their purview. Directors of emergency management play a crucial part in ensuring the safety of all employees and equipping staff to act effectively in case of an emergency.

Plans for disaster preparation choose appropriate organizational resources, lay down the tasks and roles, establish rules and processes, and plan exercises to increase preparedness for disasters. The effectiveness of the response activities is improved when the needs of populations affected by catastrophes are anticipated. The effectiveness of the response operations is increased by increasing the ability of workers, volunteers, and disaster management teams to deal with crises. Plans could consist of sites for temporary refuge, routes for evacuation, and water and energy sources for emergencies. Additionally, they might address stockpile requirements, communication protocols, training plans, chain of command, and training programs. One of the most crucial metrics for gauging the effectiveness of an evacuation is the time it takes.
Residents who are detained for an extended period of time represent a serious threat to staff safety because of the unpredictability of events. A building’s inhabitants who attempt to flee during a fire accident exhibit a range of response times (RTs) between the time they are given a warning and the decision to leave. A number of complex factors, such as occupants’ familiarity with evacuation routes, their ability to operate evacuation amenities and fire protection apparatuses, the number of people in the area, and occupants’ psychological and physical conditions and behaviors, can affect how affected personnel are evacuated from a disaster site. Different factors have an impact on evacuation time (ET). The results indicate that it is a variable influenced by a significant number of uncertain factors, including emergency evolution dynamics, human behavior under emergency conditions, and the environment.

There are benefits to developing appropriate emergency response plans using safety and industrial hygiene resources to mitigate or prevent harm to factory personnel and nearby community residents caused by chlorine gas leaks. Everyone on the team has to be knowledgeable about how to spot leaks and react to them in order to keep the employees safe when handling chlorine. Since chlorine has a strong, unpleasant scent that resembles that of a potent cleaning solution like bleach, most chlorine leaks are quite easy to detect. Every facility that works with chlorine has to have an emergency kit on hand. This kit should include a variety of tools that may be used to stop or limit leaks around plugs, valves, or the side wall of a tank or cylinder used to store chlorine. Breathe in some fresh air and leave the location where the chlorine gas was emitted. If the community has an emergency notification system, be sure to be familiar with it. For directions, consult local authorities and emergency bulletins. If the chlorine discharge occurred outside, seek protection inside.
To ensure that the contamination does not enter, make sure all windows are closed and ventilation systems are off. Leave the location where the chlorine was discharged if you are unable to get inside. Get outside and look for higher ground if the chlorine discharge occurred indoors. Open the windows and doors to the outdoors if the chlorine leak was caused by chemicals or home cleaners to allow in fresh air.

We focus on agent-based problem-solving strategies with business decision-making capabilities for CSC, which are based on multi-criteria decision-making (MCDM) methods for dealing with automated selection in CSC and Petri net (PN) techniques for modeling such contexts. Petri nets are used as modeling tools in the discrete-event dynamic process known as the multi-agent system. In comparison to alternating current micro-grids, direct current micro-grids stand out for their ease of control and power management. They also offer a number of benefits, including higher conversion and transmission efficiency, greater reliability even in remote locations, convenient control, lower costs, and less filter effort due to the absence of reactive power, phase synchronization, high inrush current, etc. A rational actor must interact if enhancing subjective utility necessitates interaction with other agents. If there is contact between rational agents, at least one of the agents is trying to maximize his utility. Agents collaborate if their aims are the same. If their aims conflict, they engage in competition.
The majority of these interactions occur between these two extremes. An interacting agent would do well to predict the objectives of other agents. A more well-informed actor may foresee some aspects of how other agents will act in response to their objectives. In these situations, strategic thinking is required. A contact in which strategic thinking occurs is referred to as a strategic interaction (SI). In game theory, SI or games are examined. Game theory takes into account reason and the potential to forecast rational behavior. The existence of widespread awareness of reason is assumed. This implies that each participant in an interaction believes in the reason of the others and that they, in turn, believe in his rationality, and so on. The equilibrium is the expected behavior of players or participants in an interaction. If one of the players strays from equilibrium, nobody wins. Because of this, it is termed equilibrium. In finite games, there is at least one equilibrium. Artificial intelligence games involve at least two applications: agent design and mechanism design. In agent design, we have a game and must calculate appropriate behavior. In mechanism design, we have an expectation about the behavior and must develop game rules. These two goals can be addressed theoretically by running algorithms over a game tree, or practically by creating an environment in which various real players can interact. Most games are written in low-level programming languages. Game rules are more easily editable. Algorithms may be created that change game representation in every way imaginable, such as ‘reduce number of players’ or ‘remove simultaneous turns’.
Game representations may also be used to create evolutionary mechanisms. Logical Petri nets can further simplify the network structure of real-time system models, making it easier for us to analyze the properties of the system at a conceptual level, while also alleviating the problem of state space explosion to some extent. Petri nets can not only characterize the structure of a system but also describe its dynamic behavior. Currently, many scholars have proposed extended forms of Petri nets, such as logical Petri nets, timed Petri nets, and colored Petri nets, and their applications are becoming increasingly widespread. Multi-agent games involve multiple elements, such as players, strategies, utilities, and information equilibrium. The existing modeling elements of logical Petri nets cannot accurately describe these elements, so improvements need to be made to logical Petri nets. Based on the existing modeling elements of logical Petri nets, modifications or additions of new modeling elements are needed to model game elements, enabling the new model to accurately describe dynamic game problems in multi-agent systems.
We consider a mean-field game (MFG)-like scenario where a large number of agents must select between a set of various potential target destinations. This scenario is inspired by effective biological collective decision mechanisms such as the collective navigation of fish schools and honey bees searching for a new colony. The mean trajectory of all agents represents how each person impacts and is impacted by the group’s choice. The model can be seen as a stylized representation of opinion crystallization in a political campaign, for instance. The agents' biases are initially determined by their spatial positions and, in a later generalization of the model, by a combination of starting position and a priori individual preference. The existence criteria for the specified fixed point-based finite population equilibrium conditions are developed. In general, there may be several equilibria, and for the agents to compute them properly, they need to be aware of all the beginning circumstances.