
Invited Speakers

Associate Professor Yi Zhang

University of Technology Sydney, Australia

Bio: Dr Yi Zhang is an Associate Professor at the Australian Artificial Intelligence Institute (AAII) and the School of Computer Science in the Faculty of Engineering and Information Technology, University of Technology Sydney (UTS), Australia. He holds dual PhD degrees in Management Science & Engineering (Beijing Institute of Technology, China, 2016) and Software Engineering (UTS, 2017). He was a visiting scholar at the School of Public Policy, Georgia Institute of Technology (2011-2012) and the School of Mathematics and Statistics, University of Melbourne (2022-2023).
He received the 2019 Discovery Early Career Researcher Award (DECRA) from the Australian Research Council (ARC) and the 2023 Research Award from The Australian, which recognised him as the Research Field Leader in Australia’s Library and Information Science discipline.
He has published over 120 high-quality research articles, aligning with his cross-disciplinary interests in artificial intelligence for science, technology, and innovation studies (AI for ST&I). He is an Executive Editor for Technological Forecasting & Social Change, a Specialty Chief Editor for Frontiers in Research Metrics and Analytics, and an Associate Editor for IEEE Transactions on Engineering Management and Scientometrics.

Title: Artificial Intelligence for Science, Technology, & Innovation

Abstract: Artificial intelligence (AI) has been revolutionising the thinking behaviours and paradigms of human society, while its Pandora’s box brings unpredictable challenges and risks to governance and regulation. Within the broad realm of science, technology, and innovation (ST&I) studies, there has been rising interest in leveraging AI’s advancements to enhance analytical capabilities for proposing comprehensive measurements, discovering complicated relationships, and predicting future dynamics. In this talk, I will discuss the current interactions between ST&I challenges and AI-empowered solutions, with a particular focus on understanding technological change through large-scale literature analysis, drawing on approaches such as streaming data analytics, heterogeneous graph mining, graph learning techniques, and large language models.

Dr. Junyu Xuan

Senior Lecturer
University of Technology Sydney, Australia

Bio: Dr Junyu Xuan is an IEEE/ACM Senior Member, ISBA/BNP Life Member, ARC Discovery Early Career Researcher Award (DECRA) Fellow, and a Senior Lecturer at the Australian Artificial Intelligence Institute in the Faculty of Engineering and IT at the University of Technology Sydney (UTS). His research interests include Probabilistic Machine Learning, Bayesian Nonparametric Learning, Bayesian Deep Learning, Reinforcement Learning, Text Mining, and Graph Neural Networks. He has published over 60 papers in high-quality journals and conferences, including Artificial Intelligence Journal, Machine Learning Journal, IEEE TNNLS, IEEE TKDE, ACM Computing Surveys, ICDM, NIPS, and AAAI. He has served as a PC or Senior PC member for conferences such as NIPS, ICML, UAI, ICLR, AABI, IJCAI, AAAI, and EMNLP.

Title: Functional Bayesian Deep Learning: Beyond Function Approximation to Function Distribution Approximation

Abstract: Bayesian deep learning (BDL) is an emerging field that combines the strong function approximation power of deep learning with the uncertainty modelling capabilities of Bayesian inference. This synergy is poised to enhance model generalization and robustness, offering valuable uncertainty estimations for a range of safety-critical applications, including medical diagnostics, diabetes detection, autonomous driving, and civil aviation. Despite these advantages, the fusion introduces complexities to classical posterior inference in parameter space, such as non-meaningful priors, intricate posteriors, and possible pathologies. This talk will delve into the driving forces, concepts, and methodologies underpinning BDL in function space, segueing into pivotal technological breakthroughs and their applications in machine learning tasks. To conclude, we will explore the prevailing hurdle faced by BDL.
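
To make the kind of predictive uncertainty discussed above concrete, the following toy sketch approximates a distribution over functions with Bayesian linear regression on random Fourier features: sampled weight vectors correspond to sampled functions, and the spread of those functions grows away from the data. It is a generic Python illustration, not the function-space methods presented in the talk; the synthetic data and all settings are assumptions made only for this example.

# A minimal sketch of a "distribution over functions" rather than a single fitted
# function: Bayesian linear regression on random Fourier features, whose Gaussian
# weight posterior induces a distribution over functions (a crude GP surrogate).
# Generic illustration only; not the speaker's method. Data and settings invented.
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression data with a gap, so uncertainty should grow inside the gap.
x = np.concatenate([rng.uniform(-3, -1, 30), rng.uniform(1, 3, 30)])
y = np.sin(x) + 0.1 * rng.standard_normal(x.shape)

# Random Fourier features map inputs into a fixed feature space.
D, lengthscale = 200, 1.0
w = rng.standard_normal(D) / lengthscale
b = rng.uniform(0, 2 * np.pi, D)
phi = lambda t: np.sqrt(2.0 / D) * np.cos(np.outer(t, w) + b)

# Closed-form Gaussian posterior over the feature weights
# (prior N(0, I), observation noise 0.1).
noise = 0.1
Phi = phi(x)
A = Phi.T @ Phi / noise**2 + np.eye(D)
mean = np.linalg.solve(A, Phi.T @ y) / noise**2
cov = np.linalg.inv(A)

# Each posterior weight sample defines a whole function; their spread is the
# predictive uncertainty that Bayesian deep learning aims to provide.
xs = np.linspace(-4, 4, 200)
samples = rng.multivariate_normal(mean, cov, size=20)   # 20 sampled functions
fs = phi(xs) @ samples.T
print("function spread inside the data gap:", fs[np.abs(xs) < 1].std().round(3))
print("function spread near observed data :", fs[(xs > 1) & (xs < 3)].std().round(3))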

Professor Hang Yu

Shanghai University, China

Bio: Hang Yu is currently a Full Professor with the School of Computer Engineering and Science, Shanghai University, where he is the Associate Director of the Institute of Urban Renewal and Sustainable Development of Shanghai. His research interests include large models, graph machine learning, and generative intelligence. He has published more than 100 papers, including in high-profile journals such as IEEE TKDE, IEEE TIFS, IEEE TNNLS, IEEE TCYB, and IEEE TFS, and in leading conferences recommended by the China Computer Federation (CCF) such as ACL, AAAI, CVPR, ACM MM, and WWW. He has won more than 10 research grants in the last 5 years, including the Young Scientists Fund of the National Natural Science Foundation of China (NSFC) and grants from the Shanghai Science and Technology Committee (SSTC). He serves as a Senior Associate Editor for Knowledge-Based Systems (Elsevier) and has served as a guest editor of 4 special issues for IEEE transactions and other international journals.

Title: Multi-Scenario Applications of Graph Learning: Research on Knowledge Graphs and Fraud Detection

Abstract: This talk centers on graph learning technologies and systematically explores their application mechanisms in two major scenarios: knowledge graphs and fraud detection. Although knowledge graphs and fraud detection have different task objectives, both are based on graph-structured data, for which graph learning provides efficient modeling and analytical approaches. In the domain of knowledge graphs, the focus is on tasks such as graph completion, link prediction, and knowledge question answering. The talk analyzes in depth the capability of graph learning models to extract topological features of entities and relationships, elucidating how these models exploit graph structural patterns to achieve key functions, including missing knowledge completion, entity association prediction, and semantic question answering and reasoning. For the fraud detection scenario, the talk mainly analyzes application strategies of graph learning to address three core challenges: (1) overcoming the bottleneck of insufficient labeled data through semi-supervised learning methods, (2) enabling real-time updates of fraud patterns via dynamic graph learning mechanisms, and (3) achieving collaborative detection under privacy protection using federated graph learning frameworks.
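
As a concrete, if highly simplified, illustration of point (1), the sketch below propagates a handful of known fraud labels over a small transaction graph so that unlabelled accounts inherit risk scores from their neighbours. It uses plain label propagation in Python/numpy and is only a stand-in for the semi-supervised graph learning methods discussed in the talk; the graph, labels and account names are invented.

# Semi-supervised toy example: spread two known labels (fraud / legitimate)
# across a transaction graph so every account receives a fraud score.
# Illustration only; not the speaker's method.
import numpy as np

accounts = ["a", "b", "c", "d", "e", "f"]
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (1, 4)]   # who transacted with whom

# Symmetric, row-normalised adjacency matrix of the transaction graph.
A = np.zeros((6, 6))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
P = A / A.sum(axis=1, keepdims=True)

# Only two labels are known: account "a" is fraudulent, account "f" is legitimate.
labels = {0: 1.0, 5: 0.0}
scores = np.full(6, 0.5)
for idx, val in labels.items():
    scores[idx] = val

# Iteratively average neighbour scores, clamping the labelled nodes each round.
for _ in range(50):
    scores = P @ scores
    for idx, val in labels.items():
        scores[idx] = val

for name, s in zip(accounts, scores):
    print(f"account {name}: fraud score {s:.2f}")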

Prof. Dr.-Ing. habil. Dr. h.c. Herwig Unger

Head of the Department of Communication Networks
FernUniversität in Hagen, Germany

Bio: Prof. Dr.-Ing. habil. Dr. h.c. Herwig Unger (*1966) received his PhD from the Technical University of Ilmenau in 1994 with a work on Petri net transformation, and his habilitation from the University of Rostock in 2000 with a work on fully decentralised web operating systems. Since 2006, he has been a full professor at the FernUniversität in Hagen and head of the Department of Communication Networks. In 2019, he obtained an honorary PhD in Information Technology from King Mongkut's University of Technology North Bangkok (Thailand). His research interests are in decentralised systems and self-organisation, natural language processing, Big Data, as well as large-scale simulations. He has published more than 150 papers in refereed journals and conferences, published or edited more than 30 books, and given over 35 invited talks and lectures in 12 countries. Besides various industrial cooperations, e.g. with Airbus Industries, he has been a guest researcher/professor at ICSI Berkeley, the University of Leipzig, the Université de Montréal (Canada), the Universidad de Guadalajara (Mexico), and King Mongkut's University of Technology North Bangkok.

Title: A Brain-Inspired Approach to Sequence Learning for Natural Language Processing

Abstract: This talk bridges insights from neuroscience and machine learning to explore how the brain’s predictive architecture can inspire next-generation models for sequence processing. Beginning with a concise overview of basic neuroscience research, we highlight evidence that the human brain functions as a prediction machine, continuously anticipating sequential patterns in sensory input. This framework motivates a paradigm shift in computational approaches to language, moving beyond static representations to dynamic, prediction-driven models.
Central to this discussion are the theories of Jeff Hawkins, whose work posits that intelligence arises from the brain’s ability to learn and predict sequences through cortical hierarchies. We demonstrate how these principles translate to natural language processing (NLP), enabling the construction of words from syllables and sentences from reusable "semantic units" via hierarchical, context-sensitive prediction.
Building on these ideas, the final part of the talk introduces the GraphLearner, a novel model designed to efficiently learn and predict complex sequences with long-range dependencies. Unlike traditional Markovian approaches, the GraphLearner leverages dynamic graph structures and is also able to model hierarchical relationships and parallel processing pathways. The talk concludes with implications for NLP, cognitive modeling, and the future of biologically inspired AI.
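
To make the flavour of graph-based, prediction-driven sequence learning concrete, here is a deliberately small Python stand-in: nodes are recent syllable contexts of varying length, edges count which element followed each context, and prediction backs off from the longest matching context to shorter ones. This is emphatically not the GraphLearner itself, only a toy illustration of moving beyond first-order Markov prediction; the example sequence is invented.

# Toy context graph for next-syllable prediction; NOT the GraphLearner.
from collections import defaultdict

def train(sequence, max_context=3):
    """Count successors for every context of length 1..max_context."""
    graph = defaultdict(lambda: defaultdict(int))
    for i in range(1, len(sequence)):
        for k in range(1, max_context + 1):
            if i - k < 0:
                break
            context = tuple(sequence[i - k:i])
            graph[context][sequence[i]] += 1
    return graph

def predict(graph, history, max_context=3):
    """Back off from the longest known context to shorter ones."""
    for k in range(min(max_context, len(history)), 0, -1):
        context = tuple(history[-k:])
        if context in graph:
            successors = graph[context]
            return max(successors, key=successors.get)
    return None

syllables = "na tu ral lan guage pro cess ing na tu ral lan guage mo dels".split()
g = train(syllables)
print(predict(g, ["na", "tu"]))      # longest matching context -> 'ral'
print(predict(g, ["lan", "guage"]))  # 'pro' and 'mo' both seen once; one is returned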

Associate Professor Phayung Meesad

Director of Central Library
King Mongkut’s University of Technology North Bangkok, Thailand

Bio: Dr. Phayung Meesad is an Associate Professor in the Faculty of Information Technology and Digital Innovation at King Mongkut’s University of Technology North Bangkok (KMUTNB), where he also serves as Director of the Central Digital Library. Formerly, he was Dean of the Faculty of Information Technology and Digital Innovation, contributing significantly to academic leadership and institutional digital transformation. He has a B.Sc. in Technical Education (Electrical Engineering) from KMUTNB (1994), and both M.S. and Ph.D. degrees in Electrical Engineering from the School of Electrical and Computer Engineering, Oklahoma State University, USA, awarded in 1998 and 2002, respectively. Dr. Meesad’s research interests span a broad range of areas, including Artificial Intelligence (AI), Computational Intelligence, Machine Learning, Deep Learning, Data Science, Big Data Analytics, Time Series Forecasting, Natural Language Processing (NLP), Digital Signal and Image Processing, Business Intelligence, and Cloud and Parallel Computing. He is a prolific contributor to scholarly research, having published numerous peer-reviewed journal articles, conference proceedings, and academic books in AI and data-driven systems.

Title: Data-Driven Library Management: Harnessing Analytics to Empower Communities

Abstract: Libraries sit on rich but largely underused data assets that could unlock new knowledge for service engineering and delivery. This talk takes the provider's view of data-driven library innovation, showing how user behavior logs, feedback streams, and semantic metadata can be turned into actionable insights with machine learning and predictive analytics. Drawing on a dataset from KMUTNB’s Smart Digital Library, the presentation demonstrates current analytics techniques, including user segmentation, intent prediction, and feedback-informed content curation, which together enable evidence-based decision-making and tailored services. The talk also introduces a framework for a modular, analytics-enabled library platform covering ontology-based reasoning, recommender systems, and real-time dashboards. Ethical data stewardship and privacy-preserving analytics are the foundations of this transformation.
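
As a small illustration of the user-segmentation step mentioned above, the sketch below clusters synthetic library users by a few usage features (visits per month, loans per month, share of digital access) with a plain k-means loop in Python/numpy. The feature set, cluster count, and data are assumptions made only for this example, not a description of the KMUTNB platform.

# Toy user segmentation for library analytics; synthetic data and features.
import numpy as np

rng = np.random.default_rng(1)

# Synthetic usage features for three loose groups of users:
# [visits per month, loans per month, share of digital access].
light   = rng.normal([ 2, 1, 0.2], 0.3, size=(40, 3))
onsite  = rng.normal([12, 8, 0.3], 0.5, size=(40, 3))
digital = rng.normal([ 6, 2, 0.9], 0.3, size=(40, 3))
X = np.vstack([light, onsite, digital])

def kmeans(X, seeds, iters=50):
    """Plain k-means: assign points to the nearest centroid, then recompute."""
    centroids = X[seeds].copy()
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        centroids = np.array([X[labels == c].mean(axis=0) for c in range(len(seeds))])
    return labels, centroids

# One seed per rough group keeps this toy example stable.
labels, centroids = kmeans(X, seeds=[0, 45, 90])
for c, centre in enumerate(centroids):
    print(f"segment {c}: {np.sum(labels == c):3d} users, centre {centre.round(1)}")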

Univ.-Prof. Dr.-Ing. Kyandoghere Kyamakya

University of Klagenfurt, Austria

Bio: TBA

Title: Quantum-Flare CeNN Edge Suite: An Ultrafast Autoencoder-Enhanced, Logic-Driven Reservoir Computing Concept that Outclasses Transformers in Theory and Efficiency from Tokens to Turbulence

Abstract: CeNN-NeuroLogiX-Q is presented as a conceptual and design-stage framework that re-imagines transformer-era deep learning through a CeNN-centred, quantum-inspired reservoir coupled with autoencoders, sparse attention and symbolic logic. The heart of the system is a soft-quantum Memristor Cellular Neural Network (Q-MCeNN) operating in echo-state mode; its weights remain fixed, so only a lightweight read-out and an autoencoder front-end require training. This yields a projected (roughly estimated) 10–100× reduction in training compute and 5–20× lower inference energy while meeting or surpassing transformer accuracy on benchmark NLP, computer-vision and forecasting tasks. The entire stack weighs in at fewer than five million parameters plus kilobytes of ASP rules, yet delivers deterministic, millisecond-scale latency and can be mapped with equal ease to CPUs, microcontrollers or memristor crossbars, making it suitable for edge and safety-critical deployments in healthcare, finance and autonomous systems where auditability is mandatory.
Four design variants illustrate breadth and feasibility. "CeNN-NeuroLogiX-Q.AE-NLP" targets language modelling, question answering and entailment detection: AE modules distil sub-word embeddings, the reservoir captures long-range syntax, and ASP enforces grammatical or factual constraints, potentially delivering transformer-level BLEU and GLUE scores with a fraction of the energy budget. "CeNN-NeuroLogiX-Q.AE-Vision" swaps ViT-style patching for AE-compressed latent tiles and uses Answer-Set Programming to enforce geometric or physical scene rules, promising robust, explainable perception on milliwatt edge cameras. "CeNN-NeuroLogiX-Q.AE-TSF-UQ" combines dropout sampling, Bayesian or quantile read-outs and symbolic consistency checks to deliver calibrated forecasts (and anomaly flags) for markets, smart grids and industrial IoT. Finally, "CeNN-NeuroLogiX-Q.AE-PDE" compresses high-dimensional fields, advances them through reservoir dynamics and applies ASP-encoded boundary conditions to solve ODEs and PDEs, such as heat diffusion or Navier–Stokes, one to two orders of magnitude faster than physics-informed neural networks in preliminary small-scale simulative experiments.
A systematic feasibility assessment, covering planned component-level simulations, energy-throughput modelling and rule-integration studies, conceptually confirms the architectural soundness and quantifies the expected gains through estimations. Implementation, silicon realisation and full-scale benchmarking remain future work, forming the next milestones in converting this blueprint into tangible hardware and open-source software artefacts. Nonetheless, the present design demonstrates how a CeNN reservoir (Q-MCeNN), augmented by autoencoders, sparse attention and symbolic reasoning, can channel the transformative power of attention into a lean, traceable and multi-domain AI engine that forecasts with confidence, understands language and vision, and honours explicit physical laws when solving differential equations. NeuroLogiX-Q therefore charts a coherent path toward explainable, energy-frugal, and regulation-ready intelligence that transcends transformer-only paradigms.
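
For readers who want a concrete handle on the fixed-reservoir, trained-read-out principle at the core of the design above, the sketch below is a plain echo state network in Python/numpy: the recurrent weights are drawn once and never trained, and only a ridge-regression read-out is fitted on a toy one-step-ahead forecasting task. It is a generic ESN, not the Q-MCeNN; the signal, reservoir size and hyperparameters are assumptions made purely for illustration.

# Generic echo state network: fixed random reservoir, trained linear read-out.
# Not the Q-MCeNN described in the abstract; toy settings throughout.
import numpy as np

rng = np.random.default_rng(2)

# Toy signal: a slowly modulated sine wave, predicted one step ahead.
t = np.arange(3000)
u = np.sin(0.05 * t) * (1 + 0.3 * np.sin(0.005 * t))
inputs, targets = u[:-1], u[1:]

# Fixed random reservoir (never trained), scaled to a modest spectral radius.
n = 300
W_in = rng.uniform(-0.5, 0.5, (n, 1))
W = rng.uniform(-0.5, 0.5, (n, n))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))

# Run the reservoir and collect its states.
states = np.zeros((len(inputs), n))
x = np.zeros(n)
for i, val in enumerate(inputs):
    x = np.tanh(W_in[:, 0] * val + W @ x)
    states[i] = x

# Only this read-out is trained: ridge regression from states to targets.
washout, ridge = 100, 1e-6
S, y = states[washout:], targets[washout:]
W_out = np.linalg.solve(S.T @ S + ridge * np.eye(n), S.T @ y)

pred = S @ W_out
print("read-out parameters trained:", W_out.size)
print("one-step NRMSE:", np.sqrt(np.mean((pred - y) ** 2)) / y.std())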