
Keynote Lectures

On Knowledge Discovery and Interactive Intelligent Visualization of Biomedical Data - Challenges in Human-Computer Interaction & Biomedical Informatics
Andreas Holzinger, Human-Centered AI Lab, Institute for Medical Informatics and Statistics, Medical University Graz, Austria

Visualization & Data Mining for High Dimensional Datasets
Alfred Inselberg, School of Mathematical Sciences, Tel Aviv University, Israel

Issues in Combined Static and Dynamic Data Management
Daniela Nicklas, Department of Computer Science, Carl von Ossietzky Universität Oldenburg, Germany


On Knowledge Discovery and Interactive Intelligent Visualization of Biomedical Data - Challenges in Human-Computer Interaction & Biomedical Informatics

Andreas Holzinger
Human-Centered AI Lab, Institute for Medical Informatics and Statistics, Medical University Graz
Austria
https://www.aholzinger.at/

Brief Bio
Andreas Holzinger heads the Holzinger Group (Human-Centered AI) at the Medical University Graz and is Visiting Professor for explainable AI at the Alberta Machine Intelligence Institute in Edmonton, Canada. Since 2016 he has been Visiting Professor for machine learning in health informatics at Vienna University of Technology. He has also been Visiting Professor for machine learning and knowledge extraction in Verona, at RWTH Aachen, University College London, and Middlesex University London. He serves as a consultant for the Canadian, US, UK, Swiss, French, Italian, and Dutch governments, for the German Excellence Initiative, and as a national expert in the European Commission. He obtained a Ph.D. in Cognitive Science from Graz University in 1998 and a second doctorate (Habilitation) in Computer Science from TU Graz in 2003. Andreas Holzinger works on Human-Centered AI (HCAI), motivated by efforts to improve human health, and pioneered interactive machine learning with the human-in-the-loop. For his achievements, he was elected a member of Academia Europaea in 2019. He is paving the way towards multimodal causability, promoting robust interpretable machine learning, and advocating a synergistic approach that puts the human in control of AI and aligns AI with human values, privacy, security, and safety.


Abstract
Biomedical Informatics can be defined as “the interdisciplinary field that studies and pursues the effective use of biomedical data, information and knowledge for scientific inquiry, problem solving, and decision making, motivated by efforts to improve human health.” However, professionals in the life sciences are faced with an increasing quantity of highly complex, multi-dimensional and weakly structured data. Researchers in Human-Computer Interaction (HCI) and in Information Retrieval/Knowledge Discovery in Databases (IR/KDD) have long worked independently on methods that support expert end users in identifying, extracting and understanding information from such data; an interdisciplinary approach that brings these two fields closer together can yield synergies in applying these methods to weakly structured, complex medical data sets. The aim is to support end users in learning how to interactively analyse information properties and to visualize the most relevant parts, in order to gain knowledge, and finally wisdom, in support of smarter decision making. The danger is not only that of being overwhelmed by increasing masses of data; there is also the risk of modelling artifacts.
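
As a concrete illustration of such an exploratory step, the following minimal Python sketch standardizes a high-dimensional dataset and projects it to two dimensions so that a domain expert can visually inspect its structure. The data, the group labels, and the choice of PCA are assumptions made for illustration, not the speaker's method.

```python
# A minimal sketch of one exploratory knowledge-discovery step:
# standardize high-dimensional data, then project it to 2-D for
# visual inspection by a domain expert.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# 200 synthetic "patients" with 50 noisy measurements each, drawn from
# two latent groups (e.g., responders vs. non-responders); the data is
# invented for illustration only.
group = rng.integers(0, 2, size=200)
X = rng.normal(size=(200, 50)) + 1.5 * group[:, None]

# Standardize, then reduce to two principal components for display.
X2 = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(X))

plt.scatter(X2[:, 0], X2[:, 1], c=group, cmap="coolwarm", alpha=0.7)
plt.xlabel("PC 1")
plt.ylabel("PC 2")
plt.title("2-D projection of high-dimensional data for visual exploration")
plt.show()
```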




Visualization & Data Mining for High Dimensional Datasets

Alfred Inselberg
School of Mathematical Sciences, Tel Aviv University
Israel
http://www.cs.tau.ac.il/~aiisreal/

Brief Bio
Alfred Inselberg received a Ph.D. in Mathematics and Physics from the University of Illinois (Champaign-Urbana) and was a Research Professor there until 1966. He then held research positions at IBM, where he developed a mathematical model of the ear (TIME, Nov. 1974), while holding joint appointments at UCLA and USC, and later at the Technion and Ben Gurion University. Since 1995 he has been Professor at the School of Mathematical Sciences at Tel Aviv University. He was elected Senior Fellow at the San Diego Supercomputing Center in 1996, Distinguished Visiting Professor at Korea University in 2008, and Distinguished Visiting Professor at the National University of Singapore in 2011. He invented and developed the multi-dimensional system of Parallel Coordinates, for which he received numerous awards and patents (on air traffic control, collision avoidance, computer vision, and data mining). His textbook "Parallel Coordinates: Visual Multidimensional Geometry and its Applications" (Springer, October 2009) has a full chapter on data mining and was acclaimed by, among others, Stephen Hawking.


Abstract
A dataset with M items has 2^M subsets, any one of which may be the one that satisfies our objectives. With a good data display and interactivity, our remarkable pattern-recognition ability can cut great swaths through this combinatorial explosion and extract insights from the visual patterns. These are the core reasons for data visualization. With parallel coordinates, the search for relations in multivariate datasets is transformed into a 2-D pattern recognition problem. The foundations are developed, interlaced with applications. Guidelines and strategies for knowledge discovery are illustrated on several real datasets (financial, process control, credit score, intrusion detection, etc.), one with hundreds of variables. A geometric classification algorithm is presented and applied to complex datasets. It has low computational complexity and provides the classification rule explicitly and visually. The minimal set of variables required to state the rule (the features) is found and ordered by predictive value. Multivariate relations can be modeled as hypersurfaces and used for decision support. A model of a (real) country’s economy reveals sensitivities, the impact of constraints, trade-offs, and economic sectors unknowingly competing for the same resources. An overview of the methodology provides the foundational understanding needed to learn the patterns corresponding to various multivariate relations. These patterns are robust in the presence of errors, which is good news for the applications. We stand at the threshold of breaching the gridlock of multidimensional visualization. The parallel coordinates methodology has been applied to collision-avoidance and conflict-resolution algorithms for air traffic control (3 USA patents), computer vision (1 USA patent), data mining (1 USA patent), optimization, decision support, and elsewhere.
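
The following is a minimal sketch of the representation described above, using the parallel-coordinates plot shipped with pandas; the "financial" variable names and the class labels are invented for illustration. Each N-dimensional record becomes a polyline across N parallel axes, so relations between variables appear as 2-D patterns.

```python
# A minimal sketch of a parallel-coordinates display: every
# N-dimensional record becomes a polyline across N parallel axes.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from pandas.plotting import parallel_coordinates

rng = np.random.default_rng(42)
n = 150
# Synthetic stand-in for a multivariate dataset; the variable
# names are invented for illustration.
features = pd.DataFrame({
    "price": rng.normal(100.0, 15.0, n),
    "volume": rng.normal(50.0, 10.0, n),
    "yield": rng.normal(3.0, 1.0, n),
    "volatility": rng.normal(0.2, 0.05, n),
})

# Normalize each axis so no single variable dominates the display.
norm = (features - features.mean()) / features.std()
# A class label lets separated line bundles stand out visually.
norm["class"] = np.where(features["yield"] > 3.0, "high", "low")

parallel_coordinates(norm, class_column="class",
                     colormap="coolwarm", alpha=0.4)
plt.title("Multivariate records as polylines in parallel coordinates")
plt.show()
```

Crossing line bundles between adjacent axes, for instance, indicate negative correlation; such 2-D patterns are what the methodology reads off the display.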




Issues in Combined Static and Dynamic Data Management

Daniela Nicklas
Department of Computer Science, Carl von Ossietzky Universität Oldenburg
Germany

Brief Bio
Daniela Nicklas has been a Junior Professor for Database and Internet Technologies at the Carl von Ossietzky Universität Oldenburg since 2008. Besides her research on context-aware applications and context modeling, she works on data stream processing and sensor-based systems within the application domains of transportation, energy, and ambient environments. She received her PhD (Dr. rer. nat.) in Computer Science in 2008 from the University of Stuttgart, working within the Collaborative Research Center (SFB 627) "Nexus" under the supervision of Prof. Dr. Bernhard Mitschang. The main contribution of her thesis was an architecture and a spatial data model for the large-scale integration of context data for location-based, mobile applications. After that, she worked for two years as a PostDoc in that Collaborative Research Center, co-leading a project on context-aware workflows together with Prof. Dr. Frank Leymann. In 2008, she received an IBM Exploratory Stream Analytics Innovation Award for "Data Stream Technology for Future Energy Grid Control". Besides serving on many programme committees of pervasive computing and database conferences and workshops, she has helped organize international conferences, e.g., the Annual IEEE International Conference on Pervasive Computing and Communications (PerCom) as Vice Program Chair in 2010. She is also a member of the editorial boards of the Elsevier Pervasive Computing and Communication Journal, the International Journal of Pervasive Computing and Communications (Emerald), and the Datenbankspektrum (German journal on databases).


Abstract
With the upcoming widespread availability of sensors, more and more applications depend on physical phenomena. Up-to-date real-world information is embedded into business processes, production environments, and mobile applications, so that such context-aware applications can adapt their behavior to the current situation of their user or environment. Another example is so-called SCADA (supervisory control and data acquisition) systems, where complex installations (e.g., energy grids or power plants) are monitored and controlled. In general, data can be managed either by a database management system (DBMS) or directly by the application. The first approach has many advantages: information demands can be declared as queries and kept separate from the application. When the information demand changes, only the query has to be changed (which dramatically decreases software maintenance costs). In addition, a DBMS can optimize the query execution, so that the requested data is retrieved efficiently from the system. However, if applications depend on real-world data, the amount of data, the update rate, and low-latency requirements often prevent storage in a DBMS. The data streams from the sensors are then managed directly by the applications, with all the drawbacks this entails. The goal of data stream management systems (DSMS) is to provide the same flexibility and data independence for data streams as for stored data. However, despite their performance advantages, DSMS are no general solution, since applications often require the persistent storage of continuous query results as well as combined processing of current and historical stream data. The performance of integrated database methods, on the other hand, is limited by the expensive data management of traditional DBMS. A naïve solution would be to federate or integrate both types of systems. However, a closer look shows that such a federation raises many open questions.
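
As a toy illustration of this combined processing, the following Python sketch runs a continuous query (a per-sensor sliding-window average) over a simulated sensor stream and joins each result with static reference data. The sensor names and schema are invented for illustration, and a real DSMS would express such a query declaratively rather than in application code.

```python
# A toy sketch of the combined static/dynamic setting: a continuous
# query (per-sensor sliding-window average) over a simulated sensor
# stream, with each result joined against static reference data that
# would normally live in a DBMS. Names and schema are invented.
import random
from collections import defaultdict, deque

# Static side: stored reference data.
SENSOR_META = {
    "s1": {"location": "substation A", "unit": "kW"},
    "s2": {"location": "substation B", "unit": "kW"},
}

def sensor_stream(n_events=20):
    """Dynamic side: simulated high-rate sensor readings."""
    for _ in range(n_events):
        sensor_id = random.choice(list(SENSOR_META))
        yield sensor_id, random.uniform(10.0, 100.0)

# Continuous query: average over the last 5 readings per sensor,
# each result enriched by a stream/static join on the sensor id.
windows = defaultdict(lambda: deque(maxlen=5))
for sensor_id, value in sensor_stream():
    windows[sensor_id].append(value)
    avg = sum(windows[sensor_id]) / len(windows[sensor_id])
    meta = SENSOR_META[sensor_id]  # join with the static table
    print(f"{sensor_id} @ {meta['location']}: avg={avg:.1f} {meta['unit']}")
```

Even this toy exposes the federation issue: the window state and the static table live in different systems, and persisting the derived averages for later historical queries would require yet another path back into the DBMS.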


