Merry Christmas from ScaDS.AI
- Category: Blog
- Last updated: Wednesday, 23 December 2020 10:56
- Published: Wednesday, 23 December 2020 10:15
- Written by Miriam Herdt
A Data Science Christmas
Even though there is a lot on our minds right now, with many concerns about hygiene precautions and the upcoming holidays, we want to wish you all a Merry Christmas and give you something to smile about. Maybe you (or your children) have wondered whether it is going to snow in Leipzig this Christmas. For everyone who has asked themselves this question, a fellow researcher at ScaDS.AI Leipzig built a forecasting model based on open data from the German Weather Service. Maybe the model can tell us whether it is really going to snow on Christmas. But how did he do it? What do gingerbread and weight loss have in common? And what does that question have to do with this snow-height prediction model? To explain all this, let's start with a small tutorial on how to build such a prediction model yourself.
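To give a first idea of what such a tutorial builds up to, here is a minimal sketch of a white-Christmas predictor. Everything in it is an assumption for illustration: the file snow_leipzig.csv stands in for a historical snow-depth export from the DWD open data portal, and the simple logistic trend model is ours, not the researcher's actual forecasting model.

```python
# Hypothetical sketch: estimate the chance of snow on Christmas Eve in
# Leipzig from an (assumed) DWD export with columns "date", "snow_depth_cm".
import pandas as pd
from sklearn.linear_model import LogisticRegression

df = pd.read_csv("snow_leipzig.csv", parse_dates=["date"])

# Keep only the Christmas Eves and label them: snow on the ground or not.
xmas = df[(df["date"].dt.month == 12) & (df["date"].dt.day == 24)].copy()
xmas["white"] = (xmas["snow_depth_cm"] > 0).astype(int)

# Fit a simple trend over the years and query it for 2020.
X = xmas["date"].dt.year.to_frame(name="year")
model = LogisticRegression().fit(X, xmas["white"])
print("P(white Christmas 2020):",
      model.predict_proba(pd.DataFrame({"year": [2020]}))[0, 1])
```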
Read more ...
Successful event on "AI for Teaching"
- Category: Blog
- Last updated: Monday, 16 November 2020 10:14
- Published: Friday, 16 October 2020 10:39
- Written by Corina Weissbach
Many thanks 👏 to the numerous participants of the event on 15.10.2020 on the topic
"USE OF ARTIFICIAL INTELLIGENCE IN HIGHER EDUCATION:
CLOUD COMPUTING, DATA MANAGEMENT AND HIGH-PERFORMANCE COMPUTING"
We are pleased that so many interested listeners took part in our event 👀👂🏼
A recording of the event is available (German only). The slides are also available for download in English.
If you have any further questions about this topic, please contact us by e-mail.
We would also like to organize a similar event next year, so please stay tuned! 👍

Make science come alive!
- Category: Blog
- Last updated: Monday, 09 November 2020 14:48
- Published: Monday, 12 October 2020 07:19
- Written by Corina Weissbach
Under the heading "How will we live in the future?", DRESDEN-concept presents current cooperative research projects and innovations in the research fields of digitalization, living, climate & water, mobility, materials and cultural heritage. Climate change, demographic change, pandemics and megacities are just some of the major challenges facing our society. Scientists around the world are working to develop innovative solutions. The numerous research institutions in Dresden make an important contribution to this with their excellent research.
👉 ScaDS.AI Dresden/Leipzig participates in the exhibition by presenting our fields of research.
The joint exhibition is committed to making science visible and tangible for the public and to transferring results from science into society. The alliance shows the public the importance and strength of cooperative research.
Under the umbrella of DRESDEN-concept, an alliance of 32 Dresden research and cultural institutions, scientists work on common topics across institute and subject boundaries. We are happy to be part of this great alliance! You can visit this exhibition in front of the Kulturpalast Dresden. 👀

ScaDS.AI Participation at Student Panel of DI2KG Workshop
- Category: Blog
- Last updated: Tuesday, 24 November 2020 09:54
- Published: Monday, 21 September 2020 15:22
- Written by Daniel Obraczka
The 2nd International Workshop on Challenges and Experiences from Data Integration to Knowledge Graphs (DI2KG) aims to foster innovation in the fields of data integration and knowledge graph construction, research areas that also receive a lot of attention at ScaDS, e.g. with the FAMER framework. This is why ScaDS already participated in the first iteration of the workshop, which also resulted in a short publication describing our winning solution to last year's challenge.
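For readers new to the field, the toy sketch below illustrates the core matching step of entity resolution: deciding which records from different sources describe the same real-world entity. The data is made up and the string-similarity heuristic is only an illustration, not FAMER's actual algorithm.

```python
# Toy cross-source entity matching via string similarity (illustration only).
from difflib import SequenceMatcher

source_a = ["Apple Inc.", "Microsoft Corporation", "Siemens AG"]
source_b = ["apple incorporated", "microsoft corp", "BASF SE"]

def similarity(a: str, b: str) -> float:
    """Normalized similarity of two entity names, ignoring case."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

THRESHOLD = 0.6  # pairs above this score are considered a match
matches = [(a, b, round(similarity(a, b), 2))
           for a in source_a for b in source_b
           if similarity(a, b) >= THRESHOLD]
print(matches)  # the two company pairs that refer to the same entity
```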
Read more ...
Ray Reiter Best Paper Award for Dr. habil. Ringo Baumann, Prof. Dr. Gerhard Brewka and Dr. Markus Ulbricht
- Category: Blog
- Last updated: Friday, 04 December 2020 11:20
- Published: Tuesday, 15 September 2020 15:02
- Written by Miriam Herdt
The ScaDS.AI researchers Ringo Baumann, Gerhard Brewka and Markus Ulbricht recently won the Ray Reiter Best Paper Award at the 17th International Conference on Principles of Knowledge Representation and Reasoning.
Read more ...
Honorable Mention at the Dissertation Award of the European Association for Artificial Intelligence for Dr. Markus Ulbricht
- Category: Blog
- Last updated: Friday, 04 December 2020 11:23
- Published: Tuesday, 15 September 2020 14:37
- Written by Miriam Herdt
Dr. Markus Ulbricht received an honorable mention for his PhD thesis at the Dissertation Award of the European Association for Artificial Intelligence. The thesis contributes to the research area of knowledge representation and reasoning (KR).
Read more ...
Kick-Off: New joint research project about Cyber Security (ZIM network)
- Category: Blog
- Last updated: Tuesday, 15 September 2020 15:08
- Published: Thursday, 03 September 2020 13:46
- Written by Miriam Herdt
Today we are happy to announce the kick-off of a new and exciting joint research project with ScaDS.AI collaboration. Thanks go to all the project partners: ITPower Solutions GmbH, quapona technologies GmbH, the Fraunhofer Institute for Open Communication Systems FOKUS and Leipzig University. Our fellow researcher Martin Grimmer wrote a short sketch about his interesting new project, which is attached below. For now, we wish you a successful time.
Read more ...
Virtual ScaDS.AI Summer School on AI and Big Data attracts international crowd
- Category: Blog
- Last updated: Wednesday, 12 August 2020 14:16
- Published: Monday, 03 August 2020 08:26
- Written by Miriam Herdt
Since 2015, the data science center ScaDS.AI (Center for Scalable Data Services and Artificial Intelligence) and its predecessor, the Big Data center ScaDS Dresden/Leipzig, have run a yearly international summer school. Its 6th edition was planned to take place for a full week in July 2020 at the University of Leipzig. Due to the Covid-19 pandemic, it was replaced by a virtual and more compact two-day event on July 7-8. This opened the summer school on current AI (artificial intelligence) and Big Data topics not only to a broader and more international crowd of participants, but also to internationally renowned speakers. With more than 250 registrations from North and South America (USA, Ecuador, Brazil), Europe (Germany, Switzerland, Italy, Spain, Norway, France, UK, Romania, Ukraine), Asia and the Middle East (Russia, India, Thailand, Turkey, Iran), Africa (Morocco) and Australia, the 2020 ScaDS.AI Summer School achieved great international outreach and a larger audience than the 70-100 participants of previous years.
Read more ...
Review: Summer School 2019
- Category: Blog
- Last updated: Wednesday, 11 March 2020 14:36
- Published: Tuesday, 27 August 2019 08:47
- Written by René Jäkel
From August 17th to 23rd, 2019, the two German Big Data competence centers ScaDS Dresden/Leipzig and BBDC held the fifth international summer school on Big Data and machine learning in Dresden. This time, the summer school bridged the gap between the research fields of Big Data and machine learning, with contributions from many internationally well-known experts. The highly regarded program included keynotes from IBM, NVIDIA and Intel, speakers from academia of both competence centers, BBDC and ScaDS Dresden/Leipzig, as well as invited speakers. The topics spanned a wide range around large-scale and data-intensive computing (Big Data) and exciting new trends in machine learning, such as uncertainty quantification, distributed machine learning and architectural optimization for deep learning. Almost sixty participants could not only take part and connect with the experts, but also present their own research in a poster session and throughout the week, triggering discussions among participants. As a social activity, an archery tournament brought fun and a welcome contrast to the program and sparked some friendly competition. Stay in touch with us about future activities, e.g. the Big Data and AI in Business workshop on September 19-20 in Leipzig!
Read more ...
Review: Summer School 2018
- Category: Blog
- Last updated: Monday, 03 February 2020 13:33
- Published: Thursday, 20 September 2018 13:03
- Written by Karoline Kricke
Last year, the BBDC Berlin and the Big Data competence center ScaDS Dresden/Leipzig invited participants to the 4th international Summer School on Big Data and Machine Learning with Hackathon (https://www.scads.de/en/summerschool-2018). From 30.06. to 06.07.2018, the University of Leipzig offered a wide-ranging program that gave the more than 80 participants from industry and research insights into new findings and challenges in dealing with very large amounts of data and machine learning, and enabled a lively exchange.
As in 2017, there were again exciting and up-to-date lectures and discussions on the individual topics raised by the overarching theme of Big Data and machine learning. We would like to take this opportunity to thank all speakers and participants once again for contributing to a successful event.
Speakers from well-known companies (e.g. Microsoft, neo4j, Zalando) as well as speakers from various universities (University of Munich, Politecnico di Milano, FZ Jülich and many more) reported on problems, current research questions and solutions.
At the same time, there was a colorful accompanying program with dragon boat trips and city tours that invited participants to explore Leipzig and fostered mutual exchange.
Read more ...
Big Data in Business 2017
- Category: Blog
- Last updated: Thursday, 10 August 2017 07:20
- Published: Friday, 04 August 2017 09:05
- Written by Lisa Winkler
This year, the Leipzig site of the Big Data competence center ScaDS Dresden/Leipzig again invited participants to a workshop on the topic of "Big Data in Business" (http://scads.de/bidib2017). On June 15 and 16, 2017, a wide-ranging program was offered in the Felix-Klein-Hörsaal of the University of Leipzig, giving the more than 50 participants from industry and research insights into new findings and challenges in dealing with very large amounts of data and enabling a lively exchange.
As in 2015, there were again exciting and up-to-date talks and discussions on the individual topics raised by the overarching theme of Big Data. We would like to take this opportunity to warmly thank all speakers and participants for contributing to a successful event.
Speakers from well-known companies (including BMW Group, Immowelt AG and Huawei Technology) as well as local startups reported on practical problems and their solutions. At the same time, the proven accompanying program from 2015 was continued: scientists from the University of Leipzig presented research projects and prototypes (e.g. Gradoop and Exploids) of the Big Data competence center.
Read more ...
Virtual Cloud Infrastructure for Data Analysis
- Category: Blog
- Last updated: Wednesday, 02 August 2017 09:57
- Published: Wednesday, 02 August 2017 09:45
- Written by Yedhu Sastri
Introduction
Nowadays, data analysis is a crucial part of science and research as well as business. The data analysis process includes several steps and areas: mainly data collection, data pre-processing (checking, cleaning, etc.), the analysis itself, and visualization/interpretation of the results. Each of these steps can be realized with a wide variety of tools. Developing an efficient and powerful analysis process, especially in connection with Big Data, can be a technical challenge. It is therefore an advantage to have an infrastructure that allows testing, modifying and evaluating every single part of the analysis as well as the whole process.
The cloud infrastructure described in this article provides a cost-efficient and flexible platform for developing and evaluating complex data analysis processes. In the following, we first present the cloud infrastructure itself; in the second part, we demonstrate how the infrastructure can be used to realize a data analysis task.
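The four steps can be made concrete with a minimal sketch; the file name and columns are hypothetical, and pandas/matplotlib are just one of the many possible tool choices mentioned above.

```python
# Minimal sketch of the four analysis steps (hypothetical input file).
import pandas as pd
import matplotlib.pyplot as plt

# 1. Data collection: here simply a CSV file; in practice an API, sensors, ...
df = pd.read_csv("measurements.csv", parse_dates=["timestamp"])

# 2. Pre-processing: check and clean, e.g. drop rows with missing values.
df = df.dropna(subset=["value"])

# 3. Analysis: aggregate to a daily mean of the measured value.
daily = df.set_index("timestamp")["value"].resample("D").mean()

# 4. Visualization/interpretation of the results.
daily.plot(title="Daily mean value")
plt.show()
```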
Read more ...
Static Publications Site-Tutorial (ORC-Schlange)
- Category: Blog
- Last updated: Tuesday, 25 July 2017 07:56
- Published: Tuesday, 25 July 2017 07:00
- Written by Fabian Gärtner
Probably every research group in the world faces the problem of collecting all publications of all group members. Usually, the list of publications is displayed in a structured way on the group's homepage to provide an overview of the research topics and impact of the group.
The larger and older the group, the more publications are on this list and the more painful the manual collection of the publication list becomes. Additional features such as searching for authors, keywords and titles, linking additional author data to a publication (such as the membership period in the group), and handling name changes turn a simple publication list into an interesting use case for big data.
This tutorial presents an effective solution to this problem. It is written for Python beginners and introduces many techniques:
- Advanced features of Python 3.6
- Interacting with SQLite in Python
- Interacting with a REST API in Python
- Interacting with the ORCID public API
- Reading and writing BibTeX files in Python
- Creating HTML from a BibTeX file in Python
- Filtering HTML content with JavaScript
A basic understanding of Python syntax is required for this tutorial, but all advanced features are explained.
The tutorial is subdivided into 8 parts. Each part introduces a technique and demonstrates its usage for the use case. You can thus jump to the part of interest or follow the tutorial step by step; a full understanding of the use case, however, can only be achieved by reading the complete tutorial.
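As a small taste of the ORCID part, the sketch below fetches the work list of ORCID's public example record. The JSON paths reflect our reading of the v3.0 public API and should be checked against the tutorial and the API documentation.

```python
# Fetch all work titles of one researcher from the ORCID public API.
# (JSON layout as we understand API v3.0; verify against the docs.)
import requests

ORCID_ID = "0000-0002-1825-0097"  # ORCID's public example record

resp = requests.get(
    f"https://pub.orcid.org/v3.0/{ORCID_ID}/works",
    headers={"Accept": "application/json"},
    timeout=30,
)
resp.raise_for_status()

for group in resp.json().get("group", []):
    summary = group["work-summary"][0]           # first version of each work
    title = summary["title"]["title"]["value"]
    date = summary.get("publication-date") or {}
    year = (date.get("year") or {}).get("value", "????")
    print(year, "-", title)
```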
Read more ...
Large-scale map analysis for land-use change monitoring using machine learning methods within an HPC environment
- Category: Blog
- Last updated: Tuesday, 20 June 2017 09:43
- Published: Tuesday, 20 June 2017 09:43
- Written by Peter Winkler
Historical topographic maps are a valuable and often the only data source for tracking long-term land-use changes. Their availability and vast spatio-temporal coverage make these maps an important source of information for climate and earth system modeling (ESM). However, the automated retrieval of complex and compound geographical objects from these historical maps is a challenging task. To facilitate the laborious information extraction from these maps, we present a two-stage machine learning-based approach for segmenting urban land-use from gray-scale scans using only a small set of training samples. We employ a Conditional Random Field (CRF) which obtains its unary potentials from a Random Forest (RF). The method is tested using two inference algorithms. To evaluate the performance and scalability of the approach over large numbers of data sets, we conducted parallel computing experiments within a High Performance Computing (HPC) environment at the Center for Information Services and High Performance Computing at TU Dresden. We evaluated the methodology on the first Central-European set of trigonometry-based maps (1:25000) from 1850-1940, whose large spatial and temporal coverage makes them particularly valuable for land-use change research and historical geo-information systems (HGIS). Experimental results indicate the suitability of both the methodological approach and its parallel implementation.
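The sketch below illustrates the two-stage idea conceptually: a Random Forest trained on the few marked pixels supplies per-pixel unary potentials, which a simple Potts-model CRF then smooths. This is not the authors' code, and plain ICM stands in for the two inference algorithms actually evaluated.

```python
# Conceptual two-stage sketch: RF unaries + Potts CRF smoothed with ICM.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def segment(features, train_mask, train_labels, beta=1.0, iters=5):
    """features: (H, W, F) per-pixel features; train_mask: (H, W) bool
    marking labeled pixels; train_labels: their labels."""
    H, W, F = features.shape
    rf = RandomForestClassifier(n_estimators=100, random_state=0)
    rf.fit(features[train_mask], train_labels)

    # Stage 1: unary potentials = negative log class probabilities.
    proba = rf.predict_proba(features.reshape(-1, F)).reshape(H, W, -1)
    unary = -np.log(proba + 1e-9)
    labels = unary.argmin(axis=-1)

    # Stage 2: iterated conditional modes on a 4-connected Potts model.
    for _ in range(iters):
        for y in range(H):
            for x in range(W):
                costs = unary[y, x].copy()
                for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < H and 0 <= nx < W:
                        # Penalize labels that differ from the neighbor.
                        costs += beta * (np.arange(len(costs)) != labels[ny, nx])
                labels[y, x] = costs.argmin()
    return rf.classes_[labels]
```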
This work has been presented at the GEOBIA 2016 Conference. A conference paper has been published and is available online.
Demonstration service for binary image segmentation
- Category: Blog
- Last updated: Friday, 29 June 2018 11:39
- Published: Tuesday, 20 June 2017 09:26
- Written by Jan Frenzel
Binary image segmentation is a technique to identify distinct segments in a digital image. The main goal of segmentation is to enhance the information content of the image and to provide a standardized representation of the reconstructed segments. Image segmentation can be used in various ways; its applications range from low-level vision, like 3D reconstruction and motion estimation, to high-level problems, like image understanding and scene parsing. This demonstrator shows the applicability of the method to different raw image types, illustrating the possibly large range of application areas.
The main focus of this work was not to reach top performance in terms of, e.g., pixel accuracy. Instead, we focus on usability, computational efficiency and generalization.
Usability: unlike many existing systems, the training data may be incomplete, as e.g. in GrabCut binary segmentation, where a user provides only scribbles or bounding boxes marking some pixels as background or foreground. In doing so, most image pixels usually remain unmarked. In fact, we only have ground truth information for the pictures of one use case; this ground truth is not used for learning, but only to check the results afterwards. To summarize, we employ semi-supervised learning using quite incomplete user information.
Computational efficiency: most of the system is implemented for processing on a GPU. For the use cases below, the full pipeline (computing features, learning, inference, etc.) takes a couple of minutes, depending on the image size.
Generalization: for each use case, we learn only relatively few unknown parameters. In particular, we do not learn image features; for this we use a pre-trained Convolutional Neural Network. The default values are quite stable and sufficient for most cases.
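The scribble-style interaction described above can be tried out with OpenCV's built-in GrabCut, sketched below with a bounding box as the only (incomplete) supervision. This merely mimics the interaction model; the demonstrator itself is a separate, GPU-based system.

```python
# GrabCut with a bounding box as incomplete supervision (OpenCV).
import cv2
import numpy as np

img = cv2.imread("input.jpg")  # any test image

# The only user input: a box around the probable foreground;
# everything outside is treated as background.
rect = (50, 50, img.shape[1] - 100, img.shape[0] - 100)

mask = np.zeros(img.shape[:2], np.uint8)
bgd_model = np.zeros((1, 65), np.float64)
fgd_model = np.zeros((1, 65), np.float64)
cv2.grabCut(img, mask, rect, bgd_model, fgd_model, 5, cv2.GC_INIT_WITH_RECT)

# Pixels labeled (probably) foreground form the binary segmentation.
binary = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 255, 0)
cv2.imwrite("segmentation.png", binary.astype(np.uint8))
```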
Further information can be found here.
Random numbers are hard - even harder in virtual machines
- Category: Blog
- Last updated: Thursday, 08 June 2017 13:08
- Published: Thursday, 08 June 2017 12:51
- Written by Clemens Fritzsch
Random numbers are an essential element of cryptography and therefore of security in general. Like most security aspects, random numbers sound simple but prove hard to get right. This text discusses problems regarding random numbers in computers, especially in virtual machines.
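On a Linux guest you can get a first impression of the issue yourself: the snippet below reads the kernel's entropy estimate and draws from the non-blocking CSPRNG. On a freshly booted virtual machine, the entropy pool is often nearly empty.

```python
# Inspect the Linux kernel's entropy estimate and the non-blocking CSPRNG.
import os
import time

with open("/proc/sys/kernel/random/entropy_avail") as f:
    print("entropy_avail:", f.read().strip(), "bits")

start = time.monotonic()
os.urandom(1024 * 1024)  # non-blocking; never waits for fresh entropy
print(f"1 MiB from os.urandom in {time.monotonic() - start:.3f} s")
```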
Read more ...
Big Data Reference architectures: Are they really needed?
- Category: Blog
- Last updated: Monday, 20 March 2017 10:17
- Published: Monday, 20 March 2017 10:05
- Written by Norman Spangenberg
Reference architectures are a key research topic in business information systems. They aim to simplify software development by reusing architectural and software components. But reusability comes with a trade-off: a reference architecture can be kept at a high level of abstraction so that it can be reused across many domains and applications, or it can concentrate on one subject, which makes it easier to apply there.
In this blog post we discuss whether big data reference architectures are really needed. Our hypothesis is that current big data reference architectures are not sufficient to provide real benefit for implementing big data projects.
Read more ...
Big Data Frameworks on highly efficient computing infrastructures
- Category: Blog
- Last updated: Friday, 14 July 2017 08:04
- Published: Tuesday, 21 February 2017 15:00
- Written by Jan Frenzel
Introduction
Big Data is usually used as a synonym for data science on huge datasets, and for dealing with all kinds of obstacles that come with them. Access to large amounts of data offers high potential for finding more accurate answers to many research questions. Moreover, the ability to handle Big Data volumes may facilitate solutions to previously unsolved problems. However, many research groups do not have the necessary facilities to run large analysis jobs on the computing resources of their home institution. Furthermore, the installation, administration and maintenance of a complex and agile software stack for data analytics is often a challenging task for domain scientists. One of the key missions of the Big Data competence center ScaDS Dresden/Leipzig is therefore to provide multi-purpose data analytics frameworks for research communities, which can be used directly on the computing resources of the Center for Information Services and High Performance Computing (ZIH). Using the high performance computing (HPC) infrastructure of ZIH, ScaDS Dresden/Leipzig members and collaborating researchers can run their data analytics pipelines massively in parallel on modern hardware. The following general-purpose data analytics frameworks are currently available:
Read more ...
OSTMap - Open Source Tweet Map
- Category: Blog
- Last updated: Tuesday, 31 January 2017 09:30
- Published: Monday, 30 January 2017 10:00
- Written by Martin Grimmer
Introduction
It is often necessary to build a proof of concept to show customers, project sponsors or colleagues how easy and feasible Big Data solutions can be. With OSTMap (Open Source Tweet Map), mgm partnered with ScaDS to show that a lot can be accomplished in a short time frame with the right choice of technologies.
Read more ...
Big Data Cluster in “Shared Nothing” Architecture in Leipzig
- Category: Blog
- Last updated: Monday, 06 February 2017 09:35
- Published: Thursday, 12 January 2017 12:05
- Written by Lars-Peter Meyer
The Galaxy Cluster
The state of Saxony has funded a notable shared-nothing cluster located at the University of Leipzig and the Technical University of Dresden. Here we want to give a short overview of this new “Galaxy” cluster, which is a very nice asset for ScaDS.
Shared nothing is probably the most-referenced architecture when talking about big data. The idea behind this cluster architecture is to use large amounts of commodity hardware to store and analyze large volumes of data in a highly distributed, scalable and cost-effective way. It is optimized for massively parallel, data-oriented computations using e.g. Apache Hadoop, Apache Spark or Apache Flink.
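As a flavor of the kind of massively parallel, data-local job such a cluster is built for, here is a minimal PySpark word-count sketch; the HDFS path is hypothetical. Spark ships the computation to the data, so each node mostly processes the blocks it stores locally.

```python
# Minimal sketch of a data-parallel job on a shared-nothing cluster
# (hypothetical HDFS path; any sufficiently large text corpus works).
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("word-count").getOrCreate()

counts = (
    spark.sparkContext.textFile("hdfs:///data/corpus/*.txt")  # one partition per block
    .flatMap(lambda line: line.split())   # split lines into words
    .map(lambda word: (word, 1))          # emit (word, 1) pairs
    .reduceByKey(lambda a, b: a + b)      # aggregate counts across nodes
)

print(counts.top(10, key=lambda kv: kv[1]))  # ten most frequent words
spark.stop()
```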
Cluster Facts Overview:
Read more ...