Prof. Franck Leprévost

University of Luxembourg, Luxembourg

Prof. Dr. David Coit

Rutgers University, USA

Prof. Rodney Van Meter

Keio University, Japan

Prof. Franck Leprévost 

University of Luxembourg, Luxembourg

Large language models (LLMs) have recently achieved state-of-the-art performance in code generation. Models such as ChatGPT can generate syntactically correct, well-optimized, and human-like code with remarkable accuracy. Tools like GitHub Copilot build on these models to assist developers in writing high-quality code. As a result, LLMs are increasingly being adopted for code generation and other programming-related tasks, transforming modern software development practices.

While LLMs offer numerous benefits, their widespread use also raises critical concerns regarding academic integrity, authorship attribution, and copyright infringement. Students increasingly rely on AI systems to complete assignments and projects, and job candidates use them to misrepresent their programming abilities during technical interviews. Consequently, the ability to distinguish human-written from AI-generated code is becoming increasingly important, and interest in reliable methods for the automatic detection of AI-generated code is growing. We will present some of the most promising methods for addressing this problem, along with their limitations.
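One widely studied family of detectors scores a sample by how predictable it is under a reference language model: machine-generated code tends to lie in high-probability regions, so unusually low perplexity is one signal. Below is a minimal sketch of this idea (an illustration, not a method endorsed in the talk), assuming the Hugging Face transformers library; the gpt2 scorer and the threshold are placeholder assumptions that would have to be tuned on labeled human/AI code.

```python
# Minimal perplexity-based detector sketch. The scoring model and the
# decision threshold are illustrative assumptions, not values from the talk.
import math

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"   # assumption: any causal LM can serve as the scorer
THRESHOLD = 20.0      # assumption: would be tuned on labeled examples

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

def perplexity(code: str) -> float:
    """Average per-token perplexity of `code` under the scoring model."""
    ids = tokenizer(code, return_tensors="pt").input_ids
    with torch.no_grad():
        # Passing labels=ids makes the model return the mean cross-entropy.
        loss = model(ids, labels=ids).loss
    return math.exp(loss.item())

def looks_ai_generated(code: str) -> bool:
    # Low perplexity = highly predictable = more likely model-written.
    return perplexity(code) < THRESHOLD

snippet = "def add(a, b):\n    return a + b\n"
print(f"perplexity={perplexity(snippet):.1f}  flagged={looks_ai_generated(snippet)}")
```

Detectors of this kind are easy to build but also easy to evade, for instance by paraphrasing or lightly editing the generated code, which is one of the limitations the talk examines.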

Prof. Franck Leprévost is a French mathematician and computer scientist, professor at the University of Luxembourg since 2003 and its former Vice-President (2005–2015). Previously a CNRS researcher in Paris and professor in Grenoble, he has held visiting positions at leading institutions across Europe and Asia. His work spans algorithmic number theory, cryptology, deep learning, AI, and evolutionary algorithms, as well as the management and strategic development of higher-education and research systems.

Prof. Dr. David Coit

Rutgers University, USA

The most effective application of predictive maintenance requires operations research models and machine learning or AI working together. Historically, the development of maintenance planning models involved the rigorous application of probabilistic reliability and maintenance theory combined with the formal analytical tools of operations research and mathematical programming. These models, while effective, produced static strategies that do not reflect the realities of changing usage and environmental conditions. Furthermore, they are based on population characteristics and do not adequately reflect the differences among individual units within the population or system. Alternatively, predictive maintenance models that embrace machine learning can dynamically predict a remaining useful life (RUL). These models do reflect individual units within the system and can adapt to changing conditions. However, their true effectiveness also requires a meaningful decision rule specifying when to take action and which action to take, i.e., replace, repair, or dynamically reduce workload to compensate for anticipated degradation. To be truly effective, a combination of these philosophies is needed: useful predictive maintenance decision rules require the three-way integration of machine learning, reliability/maintenance theory, and operations research. In this talk, we will summarize different approaches to preventive and predictive maintenance models, discuss their relative advantages and disadvantages, highlight a few notable examples that demonstrate this three-way integration, and present future research challenges.
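As a toy illustration of the decision-rule side of this integration (my sketch, not one of the models from the talk), suppose a machine-learning prognostics model supplies each unit with an RUL estimate and an uncertainty; a simple rule can then map that prediction to replace, derate, or continue. All names and numbers below are invented for illustration.

```python
# Toy predictive-maintenance decision rule driven by a machine-learned
# remaining-useful-life (RUL) estimate. Every number here is an assumption.
from dataclasses import dataclass

LEAD_TIME_HOURS = 72.0  # assumed lead time needed to schedule maintenance

@dataclass
class Unit:
    unit_id: str
    predicted_rul_hours: float  # point estimate from the ML model (stubbed)
    rul_std_hours: float        # predictive uncertainty from the same model

def decide(unit: Unit) -> str:
    """Map a per-unit RUL prediction to replace / derate / continue."""
    # Use a conservative RUL (mean minus two standard deviations) so that
    # individual-unit uncertainty, not just a population average, drives
    # the decision.
    conservative_rul = unit.predicted_rul_hours - 2.0 * unit.rul_std_hours
    if conservative_rul <= LEAD_TIME_HOURS:
        return "replace"   # unit may not safely reach the next window
    if conservative_rul <= 2.0 * LEAD_TIME_HOURS:
        return "derate"    # reduce workload to slow anticipated degradation
    return "continue"      # run as-is and re-evaluate at the next prediction

fleet = [
    Unit("pump-01", predicted_rul_hours=60.0, rul_std_hours=10.0),
    Unit("pump-02", predicted_rul_hours=180.0, rul_std_hours=25.0),
    Unit("pump-03", predicted_rul_hours=900.0, rul_std_hours=80.0),
]
for u in fleet:
    print(u.unit_id, "->", decide(u))
```

A production rule would replace the fixed two-sigma margin and thresholds with the cost-optimal policy that the reliability and operations research models provide; the point is only that an ML prediction is of little use without such a rule attached.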

David Coit is a Professor in the Department of Industrial & Systems Engineering at Rutgers University, Piscataway, NJ, USA. He has also held visiting professor positions at Université Paris-Saclay, Paris, France, and Tsinghua University, Beijing, China. His current teaching and research involve system reliability modeling and optimization, and energy systems optimization. He has over 140 published journal papers and over 100 peer-reviewed conference papers (h-index 69), including the most highly cited paper ever published in Reliability Engineering & System Safety (RESS) and the fourth most cited paper in IEEE Transactions on Reliability. He is currently an Associate Editor for RESS and the Journal of Risk and Reliability, and was previously an Associate or Department Editor for IEEE Transactions on Reliability and IISE Transactions. His research has been funded by the U.S. National Science Foundation (NSF), including an NSF CAREER grant to develop new reliability optimization algorithms considering uncertainty. He has received the P. K. McElroy Award, the Alain O. Plait Award, and the William A. J. Golomski Award for best papers and tutorials at the Reliability and Maintainability Symposium (RAMS). Prof. Coit received a BS in mechanical engineering from Cornell University, an MBA from Rensselaer Polytechnic Institute (RPI), and MS and PhD degrees in industrial engineering from the University of Pittsburgh. He is a Fellow of the Institute of Industrial & Systems Engineers (IISE).

Prof. Rodney Van Meter

Keio University, Japan

to Scalable Quantum Computing

Quantum multicomputers, modular systems built from smaller quantum nodes coupled together through an interconnection network, were first proposed as a route to scalable quantum computation two decades ago. Key ideas were studied in the 2005–2015 time frame; a lull then ensued as researchers and developers focused on near-term, single-device NISQ systems. Commercial roadmaps now point toward reaching the limits of fault-tolerant, single-device machines within this decade. Activity in multicomputers has blossomed over the last three years, mostly centered on mechanisms for creating high-fidelity inter-node entanglement. Our own work began in the early 2000s with top-down designs focusing on workloads, network topologies, error correction, and techniques for distributed computation. Today, our ideas are being realized in the Q-Fly experimental network. I will review this recent progress and address the open issues in extending from single links to full networks.
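To make the link-fidelity challenge concrete, here is a back-of-the-envelope sketch (my illustration, not the talk's model): if each inter-node link is approximated as a Werner state and links are joined by ideal entanglement swapping, the Werner parameters multiply, so end-to-end fidelity decays geometrically with path length.

```python
# Back-of-the-envelope model of multi-hop entanglement (an illustrative
# simplification): each link is a Werner state, and ideal entanglement
# swapping multiplies Werner parameters along the chain.

def werner_param(fidelity: float) -> float:
    """Werner parameter p of a state with Bell-state fidelity F = (3p+1)/4."""
    return (4.0 * fidelity - 1.0) / 3.0

def fidelity_from_param(p: float) -> float:
    """Bell-state fidelity of a Werner state with parameter p."""
    return (3.0 * p + 1.0) / 4.0

def chain_fidelity(link_fidelity: float, links: int) -> float:
    """End-to-end fidelity after swapping a chain of identical Werner links."""
    return fidelity_from_param(werner_param(link_fidelity) ** links)

for links in (1, 2, 4, 8):
    print(f"{links} link(s): end-to-end fidelity = {chain_fidelity(0.97, links):.3f}")
```

Even with 97%-fidelity links, eight hops already drop end-to-end fidelity to roughly 0.79 in this simple model, which is why purification and error correction loom so large in multicomputer network design.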

Rodney Van Meter received a B.S. in engineering and applied science from the California Institute of Technology in 1986, an M.S. in computer engineering from the University of Southern California in 1991, and a Ph.D. in computer science from Keio University in 2006. His current research centers on quantum computer architecture, quantum networking, and quantum education. He is the author of the book Quantum Networking. His other research interests include storage systems, networking, and post-Moore’s Law computer architecture. He is a Professor of Environment and Information Studies at Keio University’s Shonan Fujisawa Campus. He is the Vice Center Chair of Keio’s Quantum Computing Center, co-chair of the Quantum Internet Research Group, a leader of the Quantum Internet Task Force, and a board member of the WIDE Project. Dr. Van Meter is a member of AAAS, ACM, APS, and IEEE. He is currently Editor-in-Chief of IEEE Transactions on Quantum Engineering, but this talk is 100% personal opinion.