
Keynote Lectures

Process Mining as the Superglue between Data and Process Management
Wil van der Aalst, RWTH Aachen University, Germany

From Data to the Press: Data Management for Journalism and Fact-Checking
Ioana Manolescu, Inria, France

 

Process Mining as the Superglue between Data and Process Management

Wil van der Aalst
RWTH Aachen University
Germany
 

Brief Bio
Prof.dr.ir. Wil van der Aalst is a full professor at RWTH Aachen University, where he leads the Process and Data Science (PADS) group. He is also part-time affiliated with the Fraunhofer-Institut für Angewandte Informationstechnik (FIT), where he leads FIT's Process Mining group, and with the Technische Universiteit Eindhoven (TU/e). Until December 2017, he was the scientific director of the Data Science Center Eindhoven (DSC/e) and led the Architecture of Information Systems group at TU/e. Since 2003, he has held a part-time position at Queensland University of Technology (QUT). Currently, he is also a distinguished fellow of Fondazione Bruno Kessler (FBK) in Trento and a member of the Board of Governors of Tilburg University. His research interests include process mining, Petri nets, business process management, workflow management, process modeling, and process analysis.

Wil van der Aalst has published more than 230 journal papers, 22 books (as author or editor), 530 refereed conference/workshop publications, and 80 book chapters. Many of his papers are highly cited (he is one of the most cited computer scientists in the world, with an H-index of 148 and over 100,000 citations according to Google Scholar), and his ideas have influenced researchers, software developers, and standardization committees working on process support. He has been a co-chair of many conferences, including the Business Process Management conference, the International Conference on Cooperative Information Systems, the International Conference on the Application and Theory of Petri Nets, and the IEEE International Conference on Services Computing. He is also editor or member of the editorial board of several journals, including Business & Information Systems Engineering, Computing, Distributed and Parallel Databases, Software and Systems Modeling, Computer Supported Cooperative Work, the International Journal of Business Process Integration and Management, the International Journal on Enterprise Modelling and Information Systems Architectures, Computers in Industry, IEEE Transactions on Services Computing, Lecture Notes in Business Information Processing, and Transactions on Petri Nets and Other Models of Concurrency.

He is also a member of the Council for Physics and Technical Sciences of the Royal Netherlands Academy of Arts and Sciences and serves on the advisory boards of several organizations, including Fluxicon, Celonis, Processgold, and Bright Cape. In 2012, he received the degree of doctor honoris causa from Hasselt University in Belgium. He also served as scientific director of the International Laboratory of Process-Aware Information Systems of the National Research University Higher School of Economics in Moscow. In 2013, he was appointed Distinguished University Professor of TU/e and was awarded an honorary guest professorship at Tsinghua University. In 2015, he was appointed honorary professor at the National Research University Higher School of Economics in Moscow. He is also an IFIP Fellow and an elected member of the Royal Netherlands Academy of Arts and Sciences (Koninklijke Nederlandse Akademie van Wetenschappen), the Royal Holland Society of Sciences and Humanities (Koninklijke Hollandsche Maatschappij der Wetenschappen), and the Academy of Europe (Academia Europaea). In 2018, he was awarded an Alexander von Humboldt Professorship, Germany's most valuable research award (five million euros).


Abstract
Process mining reveals how people and organizations really function; reality is often very different, and far less structured, than expected. Process discovery exposes the variability of real-life processes. Conformance checking pinpoints and diagnoses compliance problems. Task mining exploits user-interaction data to enrich traditional event data. All of these forms of process mining can and should support Robotic Process Automation (RPA) initiatives. Process mining can be used to decide what to automate and to monitor the cooperation between software robots, people, and traditional information systems. In deciding what to automate, the Pareto principle plays an important role: often, 80% of the behavior in the event data is described by 20% of the trace variants or activities. An organization can use such insights to "pick its automation battles", e.g., by analyzing the economic and practical feasibility of RPA opportunities before implementation. This paper discusses how to leverage the Pareto principle in RPA and other process automation initiatives.
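To make this Pareto-based selection concrete, the following minimal Python sketch (not taken from the keynote; the toy event log, activity names, and the 80% threshold are illustrative assumptions) groups cases into trace variants and reports how many of the most frequent variants are needed to cover 80% of the cases.

    from collections import defaultdict

    # Hypothetical event log: case id -> ordered list of activities (the trace).
    # In practice these traces would be extracted from records of
    # (case, activity, timestamp) events.
    traces = {
        "c1": ["register", "check", "approve"],
        "c2": ["register", "check", "approve"],
        "c3": ["register", "check", "approve"],
        "c4": ["register", "check", "approve"],
        "c5": ["register", "check", "reject"],
    }

    # Count how often each distinct trace variant (activity sequence) occurs.
    variant_counts = defaultdict(int)
    for trace in traces.values():
        variant_counts[tuple(trace)] += 1

    # Rank variants by frequency and report how many are needed to cover 80% of cases.
    total_cases = len(traces)
    covered = 0
    ranked = sorted(variant_counts.items(), key=lambda kv: kv[1], reverse=True)
    for rank, (variant, count) in enumerate(ranked, start=1):
        covered += count
        if covered / total_cases >= 0.8:
            print(f"{rank} of {len(ranked)} variants cover "
                  f"{covered / total_cases:.0%} of the cases")
            break

On real event logs, such a coverage profile is one way to separate frequent, regular variants that may be worth automating from the long tail that is probably better left to people.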



 

 

From Data to the Press: Data Management for Journalism and Fact-Checking

Ioana Manolescu
Inria
France
 

Brief Bio
Ioana Manolescu leads the CEDAR team, a joint team of Inria Saclay and the LIX lab (UMR 7161) of Ecole polytechnique in France. The CEDAR team's research focuses on rich data analytics at cloud scale. She is a member of the PVLDB Endowment Board of Trustees and a co-president of the ACM SIGMOD Jim Gray PhD dissertation committee. Recently, she has been a general chair of the IEEE ICDE 2018 conference, an associate editor for PVLDB 2017 and 2018, and the program chair of SSDBM 2016. She has co-authored more than 130 articles in international journals and conferences and contributed to several books. Her main research interests include data models and algorithms for computational fact-checking, performance optimizations for semistructured data and the Semantic Web, and distributed architectures for complex, large-scale data. She is also the scientific director of LabIA, a French government initiative for adopting AI tools in public administration.


Abstract
Modern societies crucially rely on the availability of free media. While any citizen today has access to the tools needed to publish content and take part in debates, the best standards for reliable, verified reporting and for well-structured debates are still held by professional journalists. Historically confined to newsrooms and performed before publication, the verification of claims (aka fact-checking) has now become a very visible part of journalists' activity; the importance of some topics under discussion (e.g., large-scale pollution or the national economy) has also attracted fact-checkers from outside the journalism industry, such as scientists and NGOs. In this talk, I will outline a vision of Journalistic Dataspaces: an environment and set of tools that should support journalists and/or fact-checkers by means of digital content management. This vision draws upon recent years of collaboration with journalists from French media, notably Le Monde's fact-checking team "Les Décodeurs" and Ouest France, a large regional newspaper, as well as with many academic colleagues. I will highlight the common needs of fact-checking and modern ("data") journalism, and show how existing tools from the database, information retrieval, knowledge representation, and natural language processing communities can help realize this vision. I will also discuss the main technical and organizational challenges toward realizing it. Most of this work is part of the ANR ContentCheck project (http://contentcheck.inria.fr/).


