The ability to take data, understand it, visualize it and extract useful information from it is becoming a hugely important skill. How can you turn all those logs, purchase and trade histories, and open government data into useful information that helps your business make money?
In this talk, we’ll look at doing data science using F#. The F# language is perfectly suited for this task – type providers integrate external data directly into the language, so your language suddenly _understands_ CSV, XML, JSON, REST services and other sources. The interactive development style makes it easy to explore data and test your algorithms as you’re writing them. A rich set of libraries for working with data frames, time series and visualization gives you all the tools you need. And finally – F# easily integrates with statistical environments like R and Matlab, giving you access to the industry-standard libraries.
Even the best test suites can't entirely prevent nasty surprises: race conditions, unexpected interactions, faults in distributed protocols and the like still slip past them into production. Yet writing even more tests of the same kind quickly runs into diminishing returns. I'll talk about new automated techniques that can dramatically improve your testing, letting you focus on what your code should do rather than which cases should be tested, with plenty of war stories from the likes of Ericsson, Volvo Cars and Basho Technologies to show how these new techniques really enable us to nail the hard stuff.
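The techniques described here belong to the family of property-based testing (QuickCheck-style): instead of enumerating cases, you state a property that must hold for all inputs and let the machine search for counterexamples. A minimal, stdlib-only sketch of the idea follows; the run-length codec is an invented example, not code from the talk, and real frameworks add smarter input generation and counterexample shrinking.

```python
# A minimal, stdlib-only sketch of property-based (QuickCheck-style)
# testing. The codec under test is a hypothetical example.
import random

def rle_encode(s):
    """Run-length encode a string into (char, count) pairs."""
    out = []
    for ch in s:
        if out and out[-1][0] == ch:
            out[-1] = (ch, out[-1][1] + 1)
        else:
            out.append((ch, 1))
    return out

def rle_decode(pairs):
    """Inverse of rle_encode."""
    return "".join(ch * n for ch, n in pairs)

def check_property(prop, gen, trials=500):
    """Run `prop` against many random inputs; return a counterexample or None."""
    for _ in range(trials):
        x = gen()
        if not prop(x):
            return x
    return None

def random_string():
    # A small alphabet makes runs (the interesting case) likely.
    return "".join(random.choice("ab ") for _ in range(random.randrange(0, 30)))

# The property: decoding an encoding always gives back the original input.
counterexample = check_property(
    lambda s: rle_decode(rle_encode(s)) == s, random_string
)
```

The payoff is that one property replaces dozens of hand-written cases, and when the property fails, the framework hands you a concrete failing input.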
While Machine Learning practitioners routinely use a wide range of tools and languages, C# is conspicuously absent from that arsenal. Is .NET inadequate for Machine Learning? In this talk, I'll argue that it can be a great fit, as long as you use the right language for the job, namely F#.
F# is a functional-first language, with a concise and expressive syntax that will feel familiar to data scientists used to Python or Matlab. It combines the performance and maintainability benefits of statically typed languages, with the flexibility of Type Providers, a unique mechanism that enables seamless consumption of virtually any data source. And as a first-class .NET citizen, it interops smoothly with C#. So if you are interested in a language that can handle both flexible data exploration and the pressure of a real production system, come check out what F# has to offer!
The future of computing will be heterogeneous and the traditional tools we are used to will not be able to handle the different paradigms required when developing for these systems. This talk will provide a brief overview of heterogeneous computing and discuss how Erlang can help with the orchestration of different processing platforms, using our latest experiment on the Parallella platform as a case study.
This talk will also introduce Erlang/ALE, our new framework for embedded systems and provide an update on the Erlang Embedded project.
Talk objectives: To provide an overview of some of the research projects we have been working on at Erlang Solutions in the field of embedded and heterogeneous systems.
Target audience: Hardware and software engineers interested in computer architectures, heterogeneous computing and hardware hacking.
On the 10th of December all attendees and speakers are welcome to the Get Together evening at the Comedy Club (Vokieciu g. 2, Vilnius). For your convenience we have arranged transportation from the venue to the Comedy Club: shuttles will be waiting for you next to the venue at around 17:30-18:00! Mark Rendle, a professional stand-up comedian, and Meta-Ex, formed by Sam Aaron and Jonathan Graham, will take care of Live Coding - Live Synths - Live Music. Free beer and snacks are covered! In addition, together with our partners "Visma Lietuva" we brewed a special "Build Stuff 2013" beer!
Not only for Java. Not only for social networking. Not only for big data. Not only for math-heads. This is the non-typical graph database session where you'll learn how and why you should start looking at graphs as a data storage abstraction for everyday applications.
I used to think that databases were boring, just a necessary evil. I'd model some cool algorithms and plug the database in later. NoSQL and NoSQL conference sessions were even more boring. Come on folks, we aren't reinventing the wheel with key-value stores or JSON documents. Then I learned about the power of a graph database: the seductiveness of graph theory, connected data, and persistence that aligns with the way I think - functionally. My life is complete.
In this session I will introduce Neo4j and your new passion: graph thinking. No longer is data storage an afterthought.
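To give a flavour of "graph thinking": entities become nodes and relationships become first-class edges you can traverse directly. In Neo4j you would express the query below in Cypher (roughly `MATCH (p)-[:KNOWS]->()-[:KNOWS]->(f)`); this stdlib Python sketch only illustrates the data model, not Neo4j's actual API, and the people and relationships are invented.

```python
# A toy sketch of a property-graph data model: relationships are stored
# explicitly, so multi-hop queries are simple traversals rather than joins.
from collections import defaultdict

edges = defaultdict(list)  # node -> list of (relationship, neighbour)

def relate(a, rel, b):
    """Record a directed, typed relationship between two nodes."""
    edges[a].append((rel, b))

relate("Alice", "KNOWS", "Bob")
relate("Bob", "KNOWS", "Carol")
relate("Bob", "KNOWS", "Dave")
relate("Alice", "KNOWS", "Dave")

def friends_of_friends(person):
    """People exactly two KNOWS hops away, excluding direct friends and self."""
    direct = {b for rel, b in edges[person] if rel == "KNOWS"}
    two_hops = {c for b in direct for rel, c in edges[b] if rel == "KNOWS"}
    return two_hops - direct - {person}

# friends_of_friends("Alice") -> {"Carol"}
```

In a relational store the same query needs a self-join per hop; in a graph store the hops are the native access pattern, which is what makes connected data feel natural.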
As Scala programmers we solve a wide range of problems, from the tiniest bugfixes to the most interesting features. However, no matter how flawless, well-tested and well-typed our code is, there is something we should never forget: reality, a place where things get FUBAR all the time. So let's talk about what can and will go wrong, and what strategies we have to deal with it, to recover, and to heal our systems.
Many of us have one or more manual steps in our deploy and release processes. This leads to a lot of time spent waiting for the right people to do the job, and errors often occur because steps are forgotten or done incorrectly. It also raises high walls between testers, IT ops and developers.
This talk will start out with some general Continuous Delivery background: the whys, where you'll get to know the actual benefits of applying Continuous Delivery and the arguments you need to be allowed to spend time on it. Then we will move on to the hows, demonstrating how you can use the principles of Continuous Delivery to configure automated builds and one-click deployment, and how you can migrate your current manual process into an automated one. Along the way, we will be using TFS, TeamCity and Octopus Deploy, and discuss how they choose to solve specific problems, as well as other ways you might handle those issues.
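The core of migrating a manual process is turning each hand-run step into a script that fails loudly, so "deploy" collapses into one command a build server can run. A hypothetical sketch of that shape follows; the step names and artifact layout are invented placeholders, and in practice tools like TeamCity and Octopus Deploy supply the orchestration, logging and rollback around them.

```shell
#!/bin/sh
# Hypothetical sketch: every manual release step becomes a scripted,
# logged step, and the whole chain aborts on the first failure -
# exactly what a build server expects.
set -e   # stop at the first failing step instead of ploughing on

VERSION=${1:-1.0.0}
ARTIFACT="app-$VERSION.tar.gz"

run_step() {          # log each step so a failure is traceable to its step
    echo "==> $1"
    shift
    "$@"
}

run_step "build"   sh -c "echo compiled > app.bin"   # placeholder build
run_step "package" tar czf "$ARTIFACT" app.bin        # versioned artifact
run_step "verify"  tar tzf "$ARTIFACT"                # sanity-check package
echo "released $ARTIFACT"
```

The same artifact then moves unchanged through test, staging and production, which is what makes the deployment one-click rather than one-afternoon.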
We should verify all the software that we build. Verification technology has made such progress that this goal is becoming realistic: verification will no longer be restricted to life-critical, expensive systems but will become a normal part of the development process. I will present tool, method and language support for achieving this goal.