2025 International Conference on Digital Management and Information Technology (DMIT 2025)
Speakers






Prof. Hamed Taherdoost, University Canada West, Canada & GUS Institute | Global University Systems, UK

Biography: Prof. Hamed Taherdoost holds a PhD in Computer Science and a Master's in Information Security. He has over 20 years of experience in both industry and academia. He has worked at international companies in Cyprus, the UK, Malta, Iran, Malaysia, and Canada and has been heavily involved in the development of projects across industries including oil and gas, healthcare, transportation, and information technology, holding positions as varied as Project Manager, R&D Manager, Tech Lead, and CTO. He has spent the last nine years helping start-ups grow by implementing new projects and business lines.

Title: Blockchain and Big Data: Opportunities, Challenges, and Future Perspective

Abstract: In recent years, big data has generated a great deal of interest across a variety of scientific and engineering fields. Despite its many applications and advantages, big data presents numerous issues that must be addressed to improve service quality, including big data security and privacy, big data management, and big data analytics. Owing to its secure and decentralized nature, blockchain has the potential to greatly improve big data applications and services. Beyond this, blockchain appears crucial in providing an additional data layer for big data analytics: data recorded on a blockchain is regarded as secure, since it cannot be tampered with under the network architecture. In this speech, an overview of big data and blockchain is first provided, along with the rationale for their integration. Next, several blockchain services for big data are discussed, such as blockchain for secure big data acquisition, data protection, data analytics, and data storage.
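As a rough illustration of why data committed to a blockchain-style ledger is tamper-evident (the property the abstract appeals to), here is a minimal Python sketch of a hash-chained ledger. The SimpleLedger class, its record format, and the sample digests are hypothetical and purely illustrative; they are not part of the speaker's work.

```python
import hashlib
import json
import time

def hash_block(block: dict) -> str:
    """Deterministically hash a block's contents with SHA-256."""
    encoded = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(encoded).hexdigest()

class SimpleLedger:
    """Toy hash-chained ledger: each block commits to the previous block's hash,
    so silently altering an earlier record breaks every later link."""

    def __init__(self):
        genesis = {"index": 0, "timestamp": time.time(),
                   "records": [], "prev_hash": "0" * 64}
        self.chain = [genesis]

    def add_block(self, records: list) -> dict:
        block = {
            "index": len(self.chain),
            "timestamp": time.time(),
            "records": records,  # e.g. digests/metadata of big-data batches
            "prev_hash": hash_block(self.chain[-1]),
        }
        self.chain.append(block)
        return block

    def is_valid(self) -> bool:
        """Verify every block still points at the true hash of its predecessor."""
        return all(
            self.chain[i]["prev_hash"] == hash_block(self.chain[i - 1])
            for i in range(1, len(self.chain))
        )

ledger = SimpleLedger()
ledger.add_block([{"sensor": "A12", "batch": "2025-01-01", "digest": "ab3f..."}])
ledger.add_block([{"sensor": "B07", "batch": "2025-01-02", "digest": "9c1e..."}])
print(ledger.is_valid())          # True
ledger.chain[1]["records"] = []   # tamper with an earlier block
print(ledger.is_valid())          # False: the later block no longer matches
```

In a real deployment this integrity layer would sit alongside off-chain big data storage, with only compact digests of data batches written to the chain.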



Prof. Jianhua Zhang, Oslo Metropolitan University, Norway


Biography: Prof. Jianhua Zhang has been Professor of Computer Science at Oslo Metropolitan University (OsloMet), Norway since 2018. Before joining OsloMet, he worked as Scientific Director at a French IT company, Vekia (Lille, France). He was Professor at East China University of Science and Technology (Shanghai, China) between 2007 and 2017. He received his PhD from Ruhr University Bochum, Germany, and was a postdoctoral researcher at The University of Sheffield, UK. He was Guest Scientist at Dresden University of Technology, Germany from 2002 to 2003 and Visiting Professor at Technical University of Berlin, Germany during 2008-2015. His research interests include computational intelligence, machine learning and pattern recognition, data modeling and analytics, intelligent systems and control, modeling and control of complex systems, biomedical signal processing and data analysis, and neurocomputing (in particular, neuroergonomics and affective computing). He served as Chair of the IFAC (International Federation of Automatic Control) Technical Committee on Human-Machine Systems for two consecutive terms (2017-2023), and serves as Vice Chair of the IEEE Norway Section and Vice Chair of the IEEE CIS Norway Chapter. He is on the editorial boards of four international scientific journals, including Frontiers in Neuroscience, Cognitive Neurodynamics, and Cognition, Technology and Work. He has been chair of or keynote speaker at a number of international scientific conferences.

Title: Stock Market Forecasting via Transformer Models

Abstract: This talk examines the effectiveness of various models in predicting stock prices, from traditional models such as ARIMA and linear regression to advanced machine learning (ML) models such as Long Short-Term Memory (LSTM) networks, Prophet, and Transformers. Ensemble learning, which combines the predictions of different models to reduce their bias and variance, is also examined. Furthermore, optimizing ML models for stock price prediction requires hyperparameter tuning to ensure that the models are effective and efficient in capturing the complexities and volatility of real-world financial markets. To maximize accuracy while minimizing computational cost, different tuning strategies for hyperparameters (such as the learning rate, the number of network layers, and the batch size) are investigated, including grid search, random search, and Bayesian optimization.
Extensive experiments on real stock data showed that a Transformer model stacked with linear regression achieves superior prediction performance. We found that it is important to adapt model architectures to the characteristics of specific stock markets and that ensemble learning methods can improve the accuracy and reliability of stock time-series forecasting. The findings of this study may provide useful insight into the proper choice of model class for stock closing-price dynamics, as well as support informed stock investment decisions and portfolio management.
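As a small-scale, hedged illustration of two ideas in this abstract (stacking base models under a linear-regression meta-learner, and randomized hyperparameter search with time-ordered splits), here is a hypothetical scikit-learn sketch. It uses gradient boosting and ridge regression as lightweight stand-ins for the LSTM/Transformer models the talk actually evaluates; the synthetic data, window size, and parameter grid are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import StackingRegressor, GradientBoostingRegressor
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.model_selection import RandomizedSearchCV, TimeSeriesSplit

# Toy data: predict the next close from a window of past closes (random walk).
rng = np.random.default_rng(0)
closes = np.cumsum(rng.normal(0, 1, 600)) + 100.0
window = 20
X = np.array([closes[i:i + window] for i in range(len(closes) - window)])
y = closes[window:]

# Stacking: base learners' predictions are combined by a linear-regression
# meta-model, mirroring "Transformer stacked with linear regression" at toy scale.
stack = StackingRegressor(
    estimators=[("gbr", GradientBoostingRegressor(random_state=0)),
                ("ridge", Ridge(alpha=1.0))],
    final_estimator=LinearRegression(),
    cv=TimeSeriesSplit(n_splits=3),   # respect temporal ordering
)

# Randomized hyperparameter search over the base learner's settings.
param_distributions = {
    "gbr__learning_rate": [0.01, 0.05, 0.1],
    "gbr__n_estimators": [100, 200, 400],
    "gbr__max_depth": [2, 3, 4],
}
search = RandomizedSearchCV(stack, param_distributions, n_iter=5,
                            cv=TimeSeriesSplit(n_splits=3),
                            scoring="neg_mean_absolute_error", random_state=0)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```

The same tuning loop could be swapped for grid search or Bayesian optimization; the key design choice illustrated here is using time-series splits rather than shuffled cross-validation so that models are never validated on data earlier than their training window.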




Assoc. Prof. Minghan Li, Soochow University, China

Biography: Minghan Li holds a PhD in Computer Science and is currently an associate professor at Soochow University, Suzhou, China. He received his doctorate from Université Grenoble Alpes, France (2020-2023), after earning a bachelor's degree in Information Engineering from Xi'an Jiaotong University and a master's degree in Computer Technology from Xidian University. As first author, he has published multiple papers in key journals and international conferences in his field, including ACM Transactions on Information Systems and ACM SIGIR, and he holds two registered software copyrights. Dr. Li received the French IDEX International Mobility Scholarship and was a visiting scholar at the National University of Singapore. He has served as a reviewer for high-level journals such as Information Processing & Management, ACM Transactions on Information Systems, and Artificial Intelligence, and was a PC member for ECIR 2024. His current research focuses on information retrieval (search algorithms) with large language models, dialogue systems, and question-answering systems. He also has extensive programming experience in enterprise settings and across multiple platforms.

Title: Information Retrieval: Evolving from Pre-trained to Large Language Models

Abstract: Since the introduction of the Transformer architecture, pre-trained language models have demonstrated significant efficacy across a wide range of natural language processing tasks. In the domain of information retrieval (IR), particularly text relevance ranking, numerous models have been proposed, continually updating the algorithms behind everyday search engines. This talk will introduce a variety of algorithms based on pre-trained language models, including interaction-based and representation-based approaches, among others. Furthermore, given the recent emergence of large language models (LLMs) in artificial intelligence, the talk will also cover IR algorithms based on LLMs, as well as the role of IR in augmenting the dialogue and question-answering capabilities of LLMs.
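To make the distinction between representation-based and interaction-based ranking concrete, here is a hedged sketch using the sentence-transformers library; the model names, query, and documents are illustrative choices, not the specific systems discussed in the talk.

```python
from sentence_transformers import SentenceTransformer, CrossEncoder, util

query = "effects of interest rates on stock prices"
docs = [
    "Central bank rate hikes tend to depress equity valuations.",
    "The recipe calls for two cups of flour and a pinch of salt.",
    "Higher borrowing costs reduce corporate earnings and stock prices.",
]

# Representation-based (bi-encoder): query and documents are encoded
# independently; relevance is a similarity between their embeddings,
# so document vectors can be pre-indexed for fast first-stage retrieval.
bi_encoder = SentenceTransformer("all-MiniLM-L6-v2")
q_emb = bi_encoder.encode(query, convert_to_tensor=True)
d_emb = bi_encoder.encode(docs, convert_to_tensor=True)
bi_scores = util.cos_sim(q_emb, d_emb)[0]

# Interaction-based (cross-encoder): each query-document pair is scored
# jointly, letting attention span both texts; slower, typically used to
# re-rank a short candidate list.
cross_encoder = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")
cross_scores = cross_encoder.predict([(query, d) for d in docs])

for doc, b, c in zip(docs, bi_scores.tolist(), cross_scores.tolist()):
    print(f"bi={b:.3f}  cross={c:.3f}  {doc}")
```

The same two-stage pattern (cheap representation-based retrieval followed by interaction-based re-ranking) also underlies retrieval-augmented pipelines in which IR supplies evidence to LLM-based dialogue and question-answering systems.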
