Aaron Stannard
Developers are living in exciting but increasingly demanding times: we're expected to create applications and services that deliver better value faster, at higher volumes, with less downtime. To meet these demands we must learn new technologies and programming styles. Enter the actor model and Akka.NET.
In this talk you'll learn the fundamentals of Akka.NET and discover how you can use the power of the actor model, location transparency, clustering, and other Akka.NET concepts to build powerful, highly available systems without having to write awful boilerplate code. You'll never look at .NET the same way again.
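To make the actor model concrete, here is a minimal sketch of an actor written against Akka's classic Scala API, the JVM sibling of Akka.NET (whose C# API mirrors it closely); the actor and messages are invented for illustration:

```scala
import akka.actor.{Actor, ActorSystem, Props}

// A minimal actor: it processes one message at a time, so its internal
// state needs no locks; callers interact with it only by sending messages.
class Greeter extends Actor {
  def receive = {
    case name: String => println(s"Hello, $name")
  }
}

object GreeterDemo extends App {
  val system  = ActorSystem("demo")
  val greeter = system.actorOf(Props[Greeter], "greeter")
  greeter ! "J On The Beach"   // asynchronous, fire-and-forget send
  Thread.sleep(500)            // crude: give the message time to arrive
  system.terminate()
}
```

Location transparency builds on exactly this: because `greeter` is only a reference that accepts messages, the actor behind it can live in the same process or on another cluster node without the calling code changing.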
The aim of the Internet of Things is to provide valuable information about everything around us. Citizens demand open access to know directly what happens in their cities, without any intermediary. But nowadays we are flooded by countless news sources, and it is very difficult to digest them all.
Over-information is the new way of hiding information. If we demand context and facts instead of dumb numbers, the biggest legacy of the Internet of Things will be a world that is more transparent and democratic. Smart solutions also improve citizens' daily lives by controlling pollution levels, providing alerts against medical epidemics or managing cities' traffic.
Libelium technology has been applied worldwide. During the Fukushima crisis after the tsunami in 2011, we developed a sensor board to measure radiation. As a result, a series of boards were shipped at no charge to the Tokyo Hackerspace and other working groups in Japan to allow citizens to monitor radiation levels autonomously. With MySignals, an eHealth development platform, we want to tackle one of the main challenges of the century: enhancing universal access to healthcare for more than 2 billion people worldwide.
Even the best, biggest, beachiest data out there is useless if users can't easily search and analyze it. Under the right circumstances, a custom query language can be a powerful interface to that data, but only if that interface is chosen and developed consciously, with top priority given to creating a fitting domain abstraction, a first-class user experience, and a simple yet flexible implementation that doesn't reinvent the wheel.
These are takeaways from the real-world experiences of ÜberResearch and Valo: two different companies with very different needs, which nevertheless ended up taking similar approaches to the selection and creation of query languages as data interfaces. From the lessons they've learned (some more painfully than others), we'll construct a roadmap for choosing, designing, and implementing a custom query language that lets your users interact with your big, beautiful data in all its glory.
Most IoT devices run a Linux distribution, but without a clear update and/or security strategy.
In this talk we will go through some of the problems IoT devices currently face, and the tools and strategies we can use today to make the situation a bit better for new devices, while keeping our time to market optimized.
We will show some features in Linux and systemd that can help improve the security of these devices. We will also introduce snaps, a packaging format that helps you distribute your application and install it in isolation from the underlying system and from other applications; and Ubuntu Core, a small, transactional version of Ubuntu for IoT devices, based on snaps.
Boaz (@bx) is the tech lead for key-value storage and one of the original engineers on the Manhattan team at Twitter. His work at Twitter has been primarily focused on building a distributed database from the ground up to support one of the highest-traffic websites on the internet.
Distributed databases are complex systems. Unlike many other types of services, databases and stateful systems in general have many constraints, some obvious and some less so, that must be respected during operations for the system to maintain correctness by not losing or lying about data. An ideal system allows you to operate on it or move data around (e.g. by adding or removing nodes) without sacrificing availability or performance any more than necessary.
Because of these complexities, distributed storage systems are often a huge pain to manage. Manhattan, Twitter’s primary key-value store, operates at a large scale with multiple configurations and capabilities that require careful and subtly different orchestration while managing the cluster, but through a thoughtful and iterative approach to cluster operations we’ve been able to make it relatively easy and pleasant to run.
This talk will cover some of the reasons why managing stateful systems is hard, including managing availability, data movement and scale, how distributed storage systems generally do it with concrete examples from Twitter’s Manhattan, and the importance of generalized infrastructure for maintaining the sanity of your team members.
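As one concrete illustration of careful data movement (a generic technique, not a description of Manhattan's actual placement logic), a toy consistent-hash ring in Scala shows why adding a node needn't reshuffle the whole keyspace:

```scala
import scala.collection.immutable.TreeMap

// Toy consistent-hash ring: keys and nodes hash onto a circle, and a key
// belongs to the next node clockwise. Adding a node only claims the keys
// between itself and its predecessor, so most data stays put.
final case class Ring(positions: TreeMap[Int, String]) {
  private def hash(s: String): Int = math.abs(s.hashCode) % 360

  def nodeFor(key: String): String =
    positions.rangeFrom(hash(key)).headOption
      .getOrElse(positions.head)._2        // wrap around the ring

  def addNode(name: String): Ring = Ring(positions + (hash(name) -> name))
}

object RingDemo extends App {
  val keys  = (1 to 1000).map(i => s"key-$i")
  val small = Ring(TreeMap.empty[Int, String]).addNode("a").addNode("b")
  val big   = small.addNode("c")
  val moved = keys.count(k => small.nodeFor(k) != big.nodeFor(k))
  println(s"$moved of ${keys.size} keys moved")  // a fraction, not all of them
}
```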
Caitie McCaffrey is a Backend Brat and Distributed Systems Diva at Twitter. Prior to that she built large scale services and systems that power the entertainment industry at 343 Industries, Microsoft Game Studios, and HBO. While at 343 Industries she partnered with the eXtreme Computing Group in Microsoft Research to productionize Orleans as part of her work rewriting the Halo Services.
Microservices have become the de facto architecture pattern for building services. However, separating business logic into small services that operate with a single logical data set has introduced consistency challenges. Previous attempts to solve this problem, such as two-phase commit, have not been widely adopted due to availability and liveness issues.
Instead, developers implement feral concurrency control mechanisms. This technique can be error prone and often results in “Death Star” architectures which rely on chained calls to enforce application invariants. These architectures become more complicated over time, are difficult to modify and extend, and often don't correctly handle all failure scenarios.
In this talk I propose a new solution to this problem: Distributed Sagas, a protocol for coordinating requests among multiple microservices while ensuring application invariants.
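As a rough sketch of the core idea (illustrative only: a real Distributed Saga also needs a durable saga log and idempotent compensations), each step pairs a request with a compensating action, and a failure rolls back whatever has already completed, in reverse order:

```scala
// Hypothetical types for illustration; not a full saga coordinator.
final case class Step(name: String, action: () => Boolean, compensate: () => Unit)

object Saga {
  def run(steps: List[Step]): Boolean = {
    var done: List[Step] = Nil
    val ok = steps.forall { s =>            // forall stops at the first failure
      val succeeded = s.action()
      if (succeeded) done = s :: done       // most recent step first
      succeeded
    }
    if (!ok) done.foreach(_.compensate())   // undo completed steps in reverse
    ok
  }
}

object BookTrip extends App {
  val trip = List(
    Step("car",    () => true,  () => println("cancel car")),
    Step("hotel",  () => true,  () => println("cancel hotel")),
    Step("flight", () => false, () => println("cancel flight"))  // fails
  )
  println(if (Saga.run(trip)) "trip booked" else "trip rolled back")
}
```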
Every engineer finds themselves, at some point, in a system they want to rewrite. Often we need to take a large legacy monolith and move it to a distributed architecture for the purposes of scaling. This talk will discuss some of the challenges encountered when attempting to rewrite a complex system, the key strategies for success, and the potential unexpected outcomes of such a project.
Christopher Meiklejohn loves distributed systems and programming languages. Previously, Christopher worked at Basho Technologies, Inc. on the distributed key-value store, Riak. Christopher develops a programming language for distributed computation, called Lasp. Christopher is currently a Ph.D. student at the Université Catholique de Louvain in Belgium.
The CAP theorem points to unavoidable tradeoffs between consistency and availability when the network can partition. This decision heavily impacts system performance and cost.
Current database design forces application developers to decide early in the design cycle, and once and for all, where they sit in this spectrum. At one extreme, strong consistency, as in Spanner or CockroachDB, requires frequent global coordination; restricting concurrency in this way greatly simplifies application development, but it reduces availability and increases latency. At the opposite extreme, systems such as Riak or Cassandra provide eventual consistency only: they never sacrifice availability, but application developers must write code to deal with all sorts of concurrency anomalies in order to prevent violation of application invariants.
However, a system only needs to be consistent enough for the application to remain correct. We propose a unique middle ground, Just-Right Consistency (JRC), composed of various techniques that do not sacrifice availability, unless provably required for the application to execute correctly.
We overview JRC, and present an open-source cloud-scale database built for it, Antidote. Antidote stores Conflict-Free Replicated Data Types (CRDTs) under Transactional Causal Consistency (TCC), the strongest model that does not compromise availability. Optionally, a transaction can be ACID, but Antidote keeps availability high by moving the required coordination outside the common path. Finally, we leverage research tools that help developers use ACID properties selectively, only when necessary for correctness.
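To make CRDTs concrete, here is a minimal sketch (a textbook grow-only counter, not Antidote's implementation): each replica increments only its own slot, and merging takes the pointwise maximum, so replicas converge without any coordination:

```scala
// G-Counter: the simplest state-based CRDT.
final case class GCounter(slots: Map[String, Long]) {
  def increment(replica: String): GCounter =
    GCounter(slots.updated(replica, slots.getOrElse(replica, 0L) + 1))

  def value: Long = slots.values.sum

  // Merge is commutative, associative and idempotent, which is what
  // lets replicas sync in any order and still agree.
  def merge(other: GCounter): GCounter =
    GCounter((slots.keySet ++ other.slots.keySet).map { k =>
      k -> math.max(slots.getOrElse(k, 0L), other.slots.getOrElse(k, 0L))
    }.toMap)
}

object CrdtDemo extends App {
  val a = GCounter(Map.empty).increment("a").increment("a")
  val b = GCounter(Map.empty).increment("b")
  println(a.merge(b).value)          // 3
  println(a.merge(b) == b.merge(a))  // true: order of merging doesn't matter
}
```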
A popular pattern today is the injection of declarative (or functional) mini-languages into general-purpose host languages. Years ago, this is what LINQ for C# was all about. Now there are many more examples, such as the Spark or Beam APIs for Java and Scala. The opposite embedding is also possible: start with a declarative (or functional) language as the outer host and then embed a general-purpose language. This is the path we took for Scope years ago (Scope is a Microsoft-internal big data analytics language), which we have recently shipped as U-SQL. In this case, the host language is close to T-SQL (Transact-SQL is Microsoft's SQL language for SQL Server and Azure SQL DB) and the embedded language is C#. By embedding the general-purpose language in a declarative language, we enable all-of-program (not just all-of-stage) optimization, parallelization, and scheduling. The resulting jobs can flexibly scale to leverage thousands of machines.
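A minimal Scala sketch of the first pattern (invented names, purely illustrative): the declarative mini-language is a small expression tree built inside the host language, which an engine can then analyse, optimize, or simply evaluate:

```scala
// A tiny embedded query language: queries are ordinary values.
sealed trait Query[A] { def run: List[A] }
final case class Source[A](data: List[A]) extends Query[A] {
  def run: List[A] = data
}
final case class Filter[A](q: Query[A], p: A => Boolean) extends Query[A] {
  def run: List[A] = q.run.filter(p)
}
final case class MapTo[A, B](q: Query[A], f: A => B) extends Query[B] {
  def run: List[B] = q.run.map(f)
}

object QueryDemo extends App {
  // Built as data first, evaluated later: this separation is what lets a
  // system like LINQ or U-SQL optimize a whole program before running it.
  val evensDoubled =
    MapTo(Filter(Source(List(1, 2, 3, 4)), (_: Int) % 2 == 0), (_: Int) * 2)
  println(evensDoubled.run)  // List(4, 8)
}
```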
Danielle Ashley is a Scala enthusiast working at Underscore. She came into software development through a rather indirect career path. Currently absorbed in the possibilities of functional programming and its relationship with other approaches.
In this talk we describe an underappreciated tool, Church encoding, that allows us to combine the best parts of FP and OO.
By Church encoding our program we can retain the simple semantics that characterise FP code, while achieving performance that may seem out of reach in a pure FP system.
Late last year Maana, a Seattle based enterprise knowledge platform startup, contracted us to write a time series analysis engine.
They commonly dealt with multi-TB data, but needed to achieve interactive speed.
We recognised that providing a streaming API, similar to Monix, Akka Streams, or Reactive Extensions, would make the software accessible to data scientists already used to Spark, but there were issues about semantics and performance.
Classic FP pull-based systems are simple to use but perform poorly, while OO push-based systems are fast but tricky to reason about.
By employing Church encoding, also known as refunctionalisation, we were able to get the best of both worlds.
The user sees a straightforward API and semantics, while under the hood the system has no runtime memory allocation and is extremely efficient.
This tool is not so widely known and the purpose of our talk is to introduce it to a wider audience.
Church encoding is a general purpose tool you can apply to your own code no matter what software you build.
It provides a relationship between the classic FP tool of algebraic data types (represented in Scala using `sealed` traits) and OO-style classes.
We can use it to convert FP-style code into an OO equivalent, which can use mutable state and other optimisations without affecting the clean semantics the user sees.
Church encoding also gives us a coherent design principle to unite FP and OO.
This provides a bridge to truly unlocking Scala's multiparadigm nature while retaining an overall architecture that is simple and consistent.
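Here is the technique in miniature (a deliberately tiny sketch, nothing like the full streaming engine): an ordinary Scala ADT next to its Church encoding, where a value is represented by its own fold:

```scala
// The ADT side: a sealed trait consumed by pattern matching.
sealed trait Shape
final case class Circle(r: Double) extends Shape
final case class Rect(w: Double, h: Double) extends Shape

def area(s: Shape): Double = s match {
  case Circle(r)  => math.Pi * r * r
  case Rect(w, h) => w * h
}

// The Church-encoded side: the data *is* its fold. This is an ordinary
// OO interface, so an implementation is free to use mutable state or
// avoid allocation without changing the semantics the caller sees.
trait ChurchShape {
  def fold[A](circle: Double => A, rect: (Double, Double) => A): A
}

def circle(r: Double): ChurchShape = new ChurchShape {
  def fold[A](c: Double => A, rt: (Double, Double) => A): A = c(r)
}

def rect(w: Double, h: Double): ChurchShape = new ChurchShape {
  def fold[A](c: Double => A, rt: (Double, Double) => A): A = rt(w, h)
}

object ChurchDemo extends App {
  val viaAdt    = area(Rect(2, 3))                                      // 6.0
  val viaChurch = rect(2, 3).fold[Double](r => math.Pi * r * r, _ * _)  // 6.0
  println(viaAdt == viaChurch)  // true: same meaning, different representation
}
```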
David Pilato is a Developer | Evangelist at Elastic and creator of the Elastic French Speakers User Group. He is a frequent speaker about all things Elastic, at conferences, for user groups and in companies with BBL talks. In his free time, he enjoys coding and DJs four times per year, just for fun. He lives with his family in Cergy, France.
Dharma Shukla (@dharmashukla) is a Distinguished Engineer at Microsoft. Dharma is also the founder of a globally distributed, multi-tenant database service on Azure. Prior to working on the current system, his work spanned a range of distributed systems and databases at Microsoft and other places.
Dharma will describe the internals of the system design and various design trade-offs they had to make in the process of building Azure Cosmos DB service. He will also share his experiences from operating a globally distributed database service worldwide and maintaining comprehensive Service Level Agreements (SLAs).
Duarte Nunes is a Software Engineer working on ScyllaDB. He has a background in concurrent programming, distributed systems and low-latency software. Prior to ScyllaDB, he was involved in distributed network virtualization.
ScyllaDB is a NoSQL database compatible with Apache Cassandra, distinguishing itself by supporting millions of operations per second, per node, with predictably low latency, on similar hardware.
Achieving such speed requires a great deal of diligent, deliberate mechanical sympathy: ScyllaDB employs a totally asynchronous, shared-nothing programming model, relies on its own memory allocators, and meticulously schedules all its IO requests.
In this talk we will go over the low-level details of all the techniques involved, from a log-structured memory allocator to an advanced cache design, covering how they are implemented and how they fully utilize the hardware resources they target.
Edson Yanaga is focused on empowering developers worldwide to deliver better software faster and more reliably. He has been nominated a Java Champion and is also a Microsoft MVP. Professor, author, blogger and frequent speaker at many international conferences, talking about Java, Cloud Computing, DevOps, Microservices and Software Craftsmanship.
The “deploy moment” is an occasion that still gives many developers the shivers. But it shouldn't be this way (at least not every time). Luckily enough, we have tools and processes today that enable us to turn the deploy moment into a routine activity. Gone are the days when we had to automate everything by hand. Today we have Kubernetes, OpenShift, and Fabric8 to automate many different scenarios out of the box. And if we need even more advanced scenarios we can use tools like Zuul and FF4J to solve the cumbersome parts for us.
Come and join this live demo session with lots of different deployment scenarios, from basic to advanced. We'll cover the basic deployment concepts and then dig into demoing how we can use open source tools like Kubernetes, OpenShift, Fabric8, Zuul and FF4J to master the deployment art.
Eric Ladizinsky is a senior scientific management executive with a strong background in physics, engineering, materials, manufacturing and team building. Mr. Ladizinsky leads D-Wave's technical effort to develop the superconducting integrated circuit fabrication process and is often called upon to evangelize on all aspects of quantum computing. At Northrop Grumman Space Technology, he ran a multi-million dollar DARPA program in quantum computing using superconducting integrated circuit technology. Mr. Ladizinsky holds a BSc in Physics and Mathematics from UCLA and is an Adjunct Professor of Physics at Loyola Marymount University.
At key points in human history, civilization took a leap forward because people discovered new ways of harnessing nature. Quantum computers, by harnessing an immense, usually hidden reality, promise unimaginable computing power if realized at scale, dramatically impacting our ability to solve the complex problems our civilization urgently needs to solve.
D-Wave Systems has been rapidly evolving commercial quantum computing systems (being explored by Google, NASA, Lockheed, USC, Los Alamos and others) that are showing signs of being at a “tipping point”: matching state-of-the-art solvers for some problems and sometimes dramatically exceeding them, portending the exciting possibility that in just a few short years D-Wave processors could exceed the capabilities of any existing or foreseeable classical computers in the areas of machine learning, AI and optimization.
This lecture will describe the basic ideas behind quantum computation, D-Wave's unique approach, the current status and future development of D-Wave's processors, and how future quantum computers could be used to solve our most pressing problems.
Fernando is a technical consultant at Oracle in Málaga working with SOA, Microservices and DevOps related technologies.
Before joining Oracle he worked as a Middleware Project Manager in the UK and as a SOA developer for different international customers in Portugal, Cape Verde Islands and East Timor.
Francesco joined the Workshop in 2011 and, as an Architect, has reshaped the company’s Fintech solutions. Today he leads a distributed engineering department whose mission is to build the best gaming platform around. He is passionate about large scale agile transformation and how organisational change supports building next-generation products.
Implementing Big Data solutions is commonly regarded as a technology challenge. This is definitely true given the ever-increasing array of options that we, as engineers, can rely on.
This talk will discuss the other, less visible obstacles that teams and organizations face when dealing with a large-scale rollout of new data products.
Galder Zamarreño is a core R&D engineer at JBoss, a division of Red Hat. He is one of the founding engineers of Infinispan, Red Hat's distributed, in-memory key-value store, and he currently spends most of his time developing Infinispan's Functional Map API as well as other data grid and caching functionality. He is very keen on functional programming and has been developing in Scala since 2009. Galder has previously worked with JBoss customers, helping them build highly distributed and massively scalable application server clusters based on technologies such as JGroups and JBoss Cache. Prior to joining Red Hat, Galder worked in the retail industry as a software developer involved in the development of an EFT software switch solution based on JBoss technologies. His love for distributed systems and open source software comes from his days at the ESIDE faculty of the University of Deusto (Bilbao, Spain), where he earned a master's degree in Computer Science.
Developers aim to write responsive, scalable, fault-tolerant, reactive applications that can handle the business needs of modern web applications without hiccups. This talk shows you how to do just that! And to add a twist, we will do it in a (pure) functional style.
Node.js is a very popular framework for developing asynchronous, event-driven, reactive applications. Infinispan, an in-memory distributed data grid designed for fast, scalable, elastic access to large volumes of data, has recently gained compatibility with the Node.js ecosystem enabling reactive applications to use it as persistence layer. When combined with Elm, a functional programming language for declaratively creating reactive web applications, these technologies offer a great platform for working with highly responsive, data-heavy applications seamlessly. In this live-coding talk, we will demonstrate how to use these technologies to build a reactive web application composed of an Elm frontend, a Node.js microservice layer and a scalable, fault tolerant Infinispan data grid for persistence.
Networking and security management, based on firewall policy configuration, has historically been very difficult because of the high complexity of networks and the diversity of firewall vendors. Thus, while DevOps teams are performing agile server configuration management, the firewall rules that define application connectivity are still managed in the old-school way, introducing a bottleneck into the software and infrastructure delivery pipeline.
In this talk, we'll show how to abstract, describe and attach the application connectivity description (policies) to the infrastructure specification as high-level intents in a multi-vendor, multi-technology network (on-premise, cloud-based, PaaS-based, etc.), ensuring continuous compliance and that all potential problems are detected before the policies are provisioned into the infrastructure.
Inés Sombra is a Distributed Systems Engineer at Fastly, where she spends her time helping the Web go faster. Ines holds an M.S. in Computology with an emphasis on Cheesy 80’s Rock Ballads. She has a fondness for steak, fernet, and a pug named Gordo. In a previous life she was a Data Engineer.
Centralized applications are easy. Your entire system lives in one physical location and you can reason about, vertically scale, and manage your system without a lot of friction. Unfortunately none of us build applications this way anymore. Our systems are distributed, have external dependencies, and may even have to be geographically redundant.
Dealing with distribution is a must at Fastly, where our applications are deployed all over the world and must be highly performant and resilient. But there are some inherent challenges related to designing and building systems that scale. In this talk we'll go over the key lessons we learned while building our Image Optimization service: what worked, what didn't, the tradeoffs we made, and what you can do as a systems engineer to learn from our experiences while building your own applications.
Part of the Product Development organisation, James is responsible for driving strategy and adoption of the Oracle Cloud Platform across EMEA for AppDev and Integration. James has spent the past few decades building apps, automating their deployment lifecycles and continuously learning more every day from experience and mistakes about how to deliver a better application, more quickly with repeatable results.
Developing and deploying multiple releases per day, or per hour, is a challenge, especially when building the infrastructure as code as well as the rest of the app. Architectural designs have changed from monolithic applications to distributed microservices with fine granularity, isolated deployment and lightweight protocols, while customers are engaged over more channels than ever: mobile, chatbots, and virtual and augmented reality. An API-first approach is critical to tying all this together, allowing cloud-native applications to access data and processes and enabling collaboration between front-end and back-end developers. Microservices and chatbots are driving a real need for all enterprises to adopt an API-first strategy to deliver great features faster.
This talk will cover why applications are not currently being sandboxed to lessen their attack surface. Mostly this is because the existing tools are not user friendly and require a low-level knowledge of syscalls that is hard to find in application developers.
Seccomp is one of these tools. It lets an application define filters for the syscalls it allows or denies. It is commonly used in the highly regarded Chrome sandbox.
Integrating things like seccomp filters into programming languages at build time could allow for creating a perfect set of filters based on the application code. In practice, some try to mimic this behavior at runtime, but it often fails because certain functions are not called during testing and their syscalls are missed, causing the user to turn the filter off completely. By integrating it into the code at build time we can ensure that all the syscalls are accounted for.
This talk will also show a proof of concept with this in Golang.
Jörg Schad is a software engineer at Mesosphere in Hamburg where he works on the Apache Mesos project. Prior to this he worked on SAP Hana and in the Information Systems Group at Saarland University. His passions are distributed (database) systems, data analytics, and distributed algorithms.
Experienced Java & Mobile Developer, he has designed many technology solutions for customers and is a cloud go-to guy in and outside Spain.
Kiki is a Lightbend Enterprise Architect and Emerging Tech & Innovation enthusiast with a passion for building large-scale, Reactive Systems. Kiki has extensive delivery experience using the Lightbend Reactive Platform in a range of industries, from digital commerce and high-tech media to hospitality and retail. In her other life, Kiki creates technological solutions to battle human trafficking.
Event Sourcing and CQRS have been buzzwords for a while now. Driven by the modernization needs of old monolithic applications, the industry's march towards more modular applications through microservices seems unstoppable. But you don't have to use the latest buzzy microservices frameworks to build rock-solid and modular applications. You can also use proven technology like Akka. This talk gives an overview of event sourcing and how to achieve it with Akka and Java 8. You'll learn how CQRS fits into the puzzle and what other technologies are there to help you build state-of-the-art applications.
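As a conceptual sketch of event sourcing (plain Scala here; Akka's persistence module wraps the same idea in actors): state is never stored directly, it is recovered as a fold over an append-only event log, and a CQRS read model is simply another fold over the same log:

```scala
// Hypothetical domain for illustration.
sealed trait Event
final case class Deposited(amount: BigDecimal) extends Event
final case class Withdrawn(amount: BigDecimal) extends Event

final case class Account(balance: BigDecimal) {
  // Applying an event is pure: no destructive update, just a new state.
  def apply(e: Event): Account = e match {
    case Deposited(a) => copy(balance = balance + a)
    case Withdrawn(a) => copy(balance = balance - a)
  }
}

object Replay extends App {
  val log = List(Deposited(100), Withdrawn(30), Deposited(5))

  // Recovery/replay is a left fold of the log.
  val state = log.foldLeft(Account(0))(_ apply _)
  println(state.balance)  // 75

  // A separate read model (CQRS): fold the same events differently.
  val depositCount = log.count { case Deposited(_) => true; case _ => false }
  println(depositCount)   // 2
}
```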
We build distributed systems to be fault-tolerant: to store data, to deliver messages, even when their underlying nodes and networks fail. Over the last three years, I've introduced distributed databases, queues, and coordination services to network partitions, clock skew, and crashes, and found that many don't live up to their claims. I'd like to share my findings with you, and offer some tips on how to verify your own systems.
Linus is a technical consultant at Oracle in Malaga working with DevOps & Cloud Native related technologies and services. Prior to joining Oracle, Linus worked as a full stack developer and cloud architect consultant for Sony Mobile.
Luis Estrada is the Global CEO and co-founder of BI Geek, which he founded in 2014. BI Geek is a Business Intelligence and Big Data niche firm that operates in three countries (USA, Spain and Mexico), specialized in designing and developing robust data solutions. At the age of 36, Luis is a graduate in Computer Science from Universidad Pontificia de Comillas (ICAI-ICADE). He has 15 years of experience in financial risks within IT environments and has led several international projects in Tier 1 institutions.
2getherbank is an online bank which relies on blockchain technology and strong API integrations with other fintechs. We have developed an information system based on Big Data, with the goal of storing and processing huge amounts of heterogeneous data. Thanks to all the stored information, 2getherbank has implemented complex models which allow it to predict customer behavior and infer customers' credit quality, recommend financial products, predict future cash flows and analyze AML and fraud patterns.
Hacker, tinkerer and a Principal Software Engineer at Capital One. He loves bots, drones and all awesomely clever things. If things are not that clever, he makes them clever. That basically sums up his day job. Outside of hours, he spends time reading, researching and learning new things. Oh, and sometimes cooking awesome dishes, because why not!
He strongly believes in empowering people to do great things. That is why he does brown bags, mentors grads and interns at his day job, and runs a YouTube channel doing tutorials after hours.
Ever wondered what goes behind a good chat bot? Do you want to build one? Well this workshop is for you!
Complexity can be diffused as soon as you peek behind the curtain. A chat bot, like a website, might look complex from the outside, but the reality is different.
This workshop will be divided into three sections. The first section will cover the basic anatomy of a chat bot. Next, we'll work together to build a chat bot using a JavaScript framework called Talkify. We'll start simple with a chat bot that'll tell us knock-knock and Chuck Norris jokes. Towards the end of the workshop, we'll go over some advanced concepts that can help us scale our chat bot to have more features.
So come! Join us on an awesome journey to talkify our bots!
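For a taste of that anatomy ahead of time, here is a deliberately tiny intent-matching sketch in Scala (the workshop itself uses the Talkify JavaScript framework; the names and rules here are invented): classify the message into an intent, then dispatch to a handler that produces the reply:

```scala
// Keyword-based intent classification: the simplest possible bot brain.
final case class Intent(name: String, keywords: Set[String])

object TinyBot extends App {
  val intents = List(
    Intent("knock_knock",  Set("knock")),
    Intent("chuck_norris", Set("chuck", "norris"))
  )

  def classify(message: String): Option[Intent] = {
    val words = message.toLowerCase.split("\\s+").toSet
    intents.find(i => i.keywords.exists(words.contains))
  }

  def respond(message: String): String = classify(message).map(_.name) match {
    case Some("knock_knock")  => "Knock knock! Who's there?"
    case Some("chuck_norris") => "Chuck Norris counted to infinity. Twice."
    case _                    => "Sorry, I only do jokes."
  }

  println(respond("tell me a knock knock joke"))  // "Knock knock! Who's there?"
}
```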
Martin is a Java Champion with over two decades of experience building complex and high-performance computing systems. He is most recently known for his work on Aeron and SBE. Previously at LMAX he was the co-founder and CTO when he created the Disruptor. Prior to LMAX, Martin worked for Betfair, for three different content companies wrestling with the world's largest product catalogues, and was a lead on some of the most significant C++ and Java systems of the 1990s in the automotive and finance domains.
Common wisdom dictates that native languages are the only means of building high-performance applications. How do managed runtimes such as those available to .NET, Java, and even JavaScript (yes, even JavaScript) compare? Many applications requiring high performance are now developed for managed runtimes, such as financial trading, data stores and analytics, message processing, and even supercomputing. Over the last few decades we have seen significant advances in managed runtimes, particularly for JIT compilers and garbage collectors. In this talk we will explore how our managed runtimes can equal, and in some cases even better, the performance of native languages.
Mathias Brandewinder has been developing software for about 10 years, and loving every minute of it, except maybe for a few release days. His language of choice was C#, until he discovered F# and fell in love with it. He enjoys arguing about code and how to make it better, and gets very excited when discussing TDD or functional programming. His other professional interests include machine learning and applied math. Mathias is a Microsoft F# MVP, author of "Machine Learning Projects for .NET Developers" (Apress), and the founder of Clear Lines Consulting. He is based in San Francisco and blogs at www.brandewinder.com.
Just like traditional applications development, machine learning involves writing code. One aspect where the two differ is the workflow. While software development follows a fairly linear process (design, develop, and deploy a feature), machine learning is a different beast. You work on a single feature, which is never 100% complete. You constantly run experiments, and re-design your model in depth at a rapid pace. Traditional tests are entirely useless. Validating whether you are on the right track takes minutes, if not hours.
In this talk, we will take the example of a Machine Learning competition we recently participated in, the Kaggle Home Depot competition, to illustrate what "doing Machine Learning" looks like. We will explain the challenges we faced, and how we tackled them, setting up a harness to easily create and run experiments, while keeping our sanity. We will also draw comparisons with traditional software development, and highlight how some ideas translate from one context to the other, adapted to different constraints.
Max Neunhöffer is a mathematician turned database developer. In his academic career he has worked for 16 years on the development and implementation of new algorithms in computer algebra. During this time he has juggled a lot with mathematical big data like group orbits containing trillions of points. Recently he has returned from St. Andrews to Germany, has shifted his focus to NoSQL databases, and now helps to develop ArangoDB. He has spoken at international conferences including Strata London or MesosCon Seattle.
What we see in the modern data store world is a race between different approaches to achieve distributed and resilient storage of data. Every application needs a stateful layer which holds the data. There are at least three necessary components, which are anything but trivial to combine, and, of course, even more challenging when heading for acceptable performance.
Over the past years there has been significant progress in both the science and practical implementations of such data stores. In his talk Max Neunhöffer will introduce the audience to some of the needed ingredients, address the difficulties of their interplay and show four modern approaches of distributed open-source data stores (ArangoDB, Cassandra, Cockroach and RethinkDB).
Pablo Musa is an Education Engineer at Elastic. He holds an MSc in computer science and has more than 10 years of experience in the Internet software industry. In past years, Pablo developed and managed large Hadoop and Elasticsearch clusters and helped different companies solve complex problems related to search, data analysis, and monitoring. Pablo's current role blends his love for teaching, Elasticsearch, and building great software.
Monitoring an entire application is not a simple task, but with the right tools it is not a hard task either. However, events like Black Friday can push your application to the limit, and even cause crashes. As the system is stressed, it generates a lot more logs, which may crash the monitoring system as well. In this talk I will walk through the best practices for using the Elastic Stack to centralize and monitor your logs. I will also share some tricks to help you with the huge increase of traffic typical of Black Friday.
Takeaway: best practices for building a monitoring system with the Elastic Stack, plus advanced tuning to optimize and increase event ingestion performance.
Paul is a Technical Architect working at Piksel and is responsible for the architecture of our flow products and services delivered by the compute and media services team here in Málaga. He has 10 years of development experience working in the UK and Spain, most recently focused on cluster management and stream processing solutions, and is interested in stream processing as a general model for computation.
Mesos and its container schedulers provide a powerful platform for cloud based applications of all types. In this session, we give practical advice for running and securing large batch, streaming and service workloads (e.g. Flink) within Mesos, and describe some of the more popular schedulers including Marathon and Aurora.
With a Computer Science background, Ravi started his professional career as a .NET developer. In 2013, he joined Oracle in Malaga as a Middleware Consultant. He currently focuses on Oracle PaaS offerings, ranging from Integration and Mobile to the Internet of Things.
As CEO at Skipjaq, Rob Harrop leads a team working on the cutting edge of machine-driven performance optimisation. When he’s not thinking about how best to tune the myriad workloads encountered by Skipjaq customers, he’s thinking hard about how to pass the optimisation burden on to machines that learn. Rob is well known as a co-founder of SpringSource, the software company behind the wildly-successful Spring Framework. At SpringSource he was a core contributor to the Spring Framework and led the team that built dm Server (now Eclipse Virgo). Prior to SpringSource, Rob was (at the age of 19) co-founder and CTO at Cake Solutions, a boutique consultancy in Manchester, UK. A respected author, speaker and teacher, Rob writes and talks frequently about large-scale systems, cloud architecture and functional programming. His published works include the highly-popular Spring Framework reference “Pro Spring”.
An accurate understanding of how our systems perform is critical for ensuring good customer service, effective capacity planning and managing the process of optimisation.
Sadly, it's all too rare to see good practice when it comes to analysing and testing the performance of systems. In this talk, we see how to approach performance analysis scientifically.
We’ll discuss how to design, construct, execute, verify and analyse performance experiments to answer four important questions about how a system behaves under load.
Attendees will learn how to collate and process large time-series data using InfluxDB. We'll see how to monitor experiments as they execute, how to analyse the results of each experiment, and how to compare results across experiments.
After earning a PhD in high-energy particle physics and while working as a systems engineer in the space business, Roland came in contact with the Akka project. He started contributing in 2010 and has been employed by Lightbend since 2011 where he has been leading the Akka project since November 2012.
Cloud computing, reactive systems, microservices: distributed programming has become the norm. But while the shift to loosely coupled message-based systems has manifest benefits in terms of resilience and elasticity, our tools for ensuring correct behavior have not grown at the same pace. Statically typed languages like Java and Scala allow us to exclude large classes of programming errors before the first test is run. Unfortunately, these guarantees are limited to the local behavior within a single process: the compiler cannot tell us that we are sending the wrong JSON structure to a given web service. Therefore distribution comes at the cost of having to write large test suites, with timing-dependent non-determinism.
In this presentation we take a first peek at ways out of this dilemma. The principles are demonstrated on the simplest distributed system: Actors. We show how parameterized ActorRefs à la Akka Typed together with effect tracking similar to HLists can help us define what an Actor can and cannot do during its lifetime—and have the compiler yell at us when we do it wrong.
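A toy Scala illustration of the first ingredient (emphatically not the real Akka Typed API): once the reference itself is parameterized by the protocol it accepts, an ill-typed send becomes a compile error rather than a runtime surprise:

```scala
// A toy typed reference: contravariant in its protocol, like an actor ref.
final class TypedRef[-A](deliver: A => Unit) {
  def !(msg: A): Unit = deliver(msg)
}

sealed trait Command
final case class Greet(whom: String) extends Command

object TypedDemo extends App {
  val greeter: TypedRef[Command] =
    new TypedRef[Command]({ case Greet(whom) => println(s"Hello, $whom") })

  greeter ! Greet("world")  // compiles and runs
  // greeter ! 42           // rejected at compile time: Int is not a Command
}
```

Tracking how the accepted protocol changes over an actor's lifetime is the harder part the talk addresses; that is where the HList-style effect tracking comes in.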
Head at Moebio Labs and Chief Data Officer at Drumwave. Mathematician, data scientist, interactive visualization developer, inventor. Creates and develops highly innovative and interactive projects for the web.
Specialized in exploratory information visualization, knowledge maps and visual data science (fusion of machine learning and interaction).
Cognition plays two roles for machine learning: the model to imitate and the target. Human intelligence, and all that we've learned about our brains and our minds, has provided powerful models for labeling, classification, regression and prediction. On the other hand, the staggering amount of data available, and the models we use to extract information from it, have to interact somehow with humans in order to become insightful and useful.
Understanding human perception and cognition is therefore instrumental in creating environments in which models, machines and humans collaborate. Dynamic data visualization is a new science that goes beyond interactive data visualization, aiming to produce deep, dynamic and productive feedback-loop relationships between huge datasets and humans, mediated by sensors, storage devices, distributed computation, models, human perception, and cultural and cognitive capabilities.
Maps have a long history of working as storytelling devices that place not only geographical information, but human and emotional information in space. Resolving social issues through spatial analysis is a tried and true method. The development of sophisticated methods has enabled professionals regardless of background or expertise to solve the complex problems of today's world. As such, for the first time, we are seeing the true meaning of the phrase "Everything happens somewhere".
Stephan Ewen is a PMC member of Apache Flink and co-founder and CTO of data Artisans. He believes that stream processing is the next step both for data analysis and for building end-to-end applications as continuous data flows.
Before founding data Artisans, Stephan was leading the development that led to the creation of Apache Flink during his Ph.D. and worked on databases at IBM and Microsoft.
Stream processing has traditionally been associated with real-time analytics. Modern stream processors like Apache Flink, however, go far beyond that and give us a new approach to building applications and services as a whole.
This talk shows how to build applications on *data streams*, *state*, and *snapshots* (point-in-time views of application state) using Apache Flink. Rather than separating computation (application) and state (database), Flink manages the application logic and state as a tight pair and uses snapshots for a consistent view onto the application and its state. With features like Flink's queryable state, the stream processor and database effectively become one.
This application pattern has many interesting properties: Aside from having fewer moving parts, it supports very high event rates because of its tight integration between computation and state, and its simple concurrency and recovery model. At the same time, it exposes a powerful consistency model, allows for seamless forking/updating/rollback of online applications, generalizes across historic and real-time data, and easily incorporates event time semantics and handling of late data. Finally, it allows applications to be defined in an easy way via streaming SQL.
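As a conceptual sketch of the pattern (plain Scala, not Flink's actual API): the application is a step function over state and events, and a snapshot is just the state captured at a known stream position, from which recovery replays only the events that came after it:

```scala
// A point-in-time view of application state at a known stream offset.
final case class Snapshot[S](offset: Long, state: S)

object SnapshotDemo extends App {
  // Run a step function over a stream, capturing a snapshot every 2 events.
  def process[S, E](events: Seq[E], init: S)(step: (S, E) => S): (S, Seq[Snapshot[S]]) =
    events.zipWithIndex.foldLeft((init, Vector.empty[Snapshot[S]])) {
      case ((state, snaps), (event, i)) =>
        val next  = step(state, event)
        val taken = if ((i + 1) % 2 == 0) snaps :+ Snapshot(i + 1L, next) else snaps
        (next, taken)
    }

  val (counts, snaps) = process(Seq("a", "b", "a", "c"), Map.empty[String, Int]) {
    (acc, word) => acc.updated(word, acc.getOrElse(word, 0) + 1)
  }
  println(counts)  // Map(a -> 2, b -> 1, c -> 1)
  println(snaps)   // snapshots at offsets 2 and 4; recovery = restore + replay
}
```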
After passing through all the development roles from developer to project manager, Stephane moved to Oracle, where he supports customers in their digital transformation and cloud adoption.
Stephen is a systems engineer and DevOps advocate. He is currently working as the infrastructure architect for the Piksel product division, which is building a multi-tenant video delivery and processing platform.
His work is focused on building resilient, scalable, distributed systems.
Mesos and its container schedulers provide a powerful platform for cloud based applications of all types. In this session, we give practical advice for running and securing large batch, streaming and service workloads (e.g. Flink) within Mesos, and describe some of the more popular schedulers including Marathon and Aurora.
The feature we always hear about whenever Java 9 is in the news is Jigsaw, modularity. But this doesn't scratch the same developer itch that Java 8's lambdas and streams did, and we're left with a vague sensation that the next version might not be that interesting.
Java 9 actually has a lot of great additions and changes to make development a bit nicer. These features can't be lumped under an umbrella term like Java 8's lambdas and streams; the changes are scattered throughout the APIs and language features that we regularly use.
In this presentation, Trisha will show a number of these features via live coding.
Along the way we'll bump into other Java 9 features, including some of the additions to interfaces and changes to deprecation. We’ll see that once you start using Java 9, you can't go back to Before.
The world of big data involves an ever changing field of players. Much as SQL stands as a lingua franca for declarative data analysis, Apache Beam (incubating) aims to provide a portable standard for expressing robust, out-of-order data processing pipelines in a variety of languages across a variety of platforms.
In this talk, I will explore the Beam model and how it enables robust, out-of-order data processing pipelines that are portable across languages and platforms.
Founder of Globalcode, the largest Brazilian educational center specialized in software development. Senger has presented more than 200 talks about Software Development, Java, Java EE and open-source hardware. His project jHome, home automation API based on Java EE, won the Duke's Choice Award 2011 and nowadays he is working on putting Java and open-source hardware together.
Java Champion, Director of SouJava and an alternate representative of the group on the JCP Executive Committee. Co-founder and director of Globalcode and The Developer's Conference. In 2011, she was a recipient of the Duke's Choice Award, for the jHome embedded environment.