For this talk I joined forces with [@debe](https://github.com/debe) and we're presenting our recent work for a customer where we cut the AWS bill in half using a holistic approach.
For this talk I joined forces with [@debe](https://github.com/debe) and we're presenting our recent work for a customer where we enabled the Apache Solr search engine to search through large data sets in near-constant time.
For this talk I joined forces with [@backstreetkiwi](https://www.nikolauswinter.de) and we're presenting our approach of combining a shared-nothing architecture with centralized logging (German version).
A talk focusing on the skills needed to run a Kubernetes cluster, from a developer advocate's perspective.
Public cloud services have become a commodity asset in the past years. Shopping24, though, is currently running all systems on pure bare metal. I'll lay out Shopping24's move into the Amazon AWS cloud a few years back and our recent journey back into a classic datacenter. The bottom line for doing effective DevOps in both classic and cloud infrastructure is automation.
Logging and monitoring are not sexy topics. Nobody wants to log into consoles and sift through log files. In Docker environments, the problem is compounded. In this session I demonstrate the interplay of Logstash and Graylog for access-log analysis, Sentry for exception logging, and Graphite and Grafana for visualizing application metrics.
In e-commerce, when searching over a wide range of products, sorting your search result is crucial to your business. You most likely want to sort your search result depending on the search context - like the dominating category or the user's search or order history. We built a Solr plugin to determine the current category "in flight" and change sorting and faceting of the current query without adding any overhead to the query.
With Hadoop and various streaming frameworks (Spark, Flink), a whole bouquet of tools is available to analyze even large data sets in (near) real time. But what if the required computing power is out of all proportion to the problem? This is where modern data structures such as the Bloom filter or the Count-Min sketch come in: they represent very large data sets in very little space.
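To illustrate the idea behind these probabilistic data structures, here is a minimal Bloom filter sketch. The class name, sizes, and hashing scheme are illustrative choices, not the implementation presented in the talk: a fixed-size bit array plus a few hash functions answers membership queries with possible false positives but no false negatives.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: a bit array plus k hash functions.
    'in' may return a false positive, but never a false negative."""

    def __init__(self, size_bits=1024, num_hashes=3):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = 0  # a Python int used as a growable bit array

    def _positions(self, item):
        # Derive k bit positions by salting the item with an index.
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.size

    def add(self, item):
        for pos in self._positions(item):
            self.bits |= 1 << pos

    def __contains__(self, item):
        return all(self.bits & (1 << pos) for pos in self._positions(item))

bf = BloomFilter()
bf.add("solr")
bf.add("hadoop")
print("solr" in bf)   # True (added items are always found)
print("spark" in bf)  # usually False; false positives are possible
```

Storing only `size_bits` of state regardless of how many items are added is exactly the space-for-accuracy trade-off that makes these structures attractive when full-blown cluster computing would be overkill.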