Proposals

Every abstract is visible to everyone, in order to encourage discussion.

There may be a delay between submitting a talk and the talk appearing below.

Psychological safety in practice (WIP)

Experience report 30 min - Suggested by Jørgen Landsnes

Many people have picked up that psychological safety is one of the most impactful factors when it comes to team performance.

Psychological safety can be hard to get a grip on. One (of several) definitions of psychological safety that I like is: being yourself without fear of negative consequences for your self-image, status or career.

That is not so easy to measure, is it? So how can we achieve psychological safety? I don't quite know, but here I will at least share some of the very concrete tips I use to contribute to psychological safety in the teams I work in.

End to end - Value Stream mapping and optimization

Workshop 1.5 hour - Suggested by Hussam Ahmad

Effective value creation based on software development depends on an optimized flow of value. Some of the concepts we have learned over the years are captured in the 7 wastes of Software Development by Mary and Tom Poppendieck. How can you turn this learning into concrete steps you can implement one by one to improve your “value stream”? In this workshop we will work on a case where you go through the whole process of both mapping and adjusting a value stream.
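To make the mapping step concrete, here is a small, entirely hypothetical worked example (not the workshop case): a value stream expressed as steps with value-adding time and waiting time, from which lead time and flow efficiency fall out.

```python
# A hypothetical mini value stream (not the workshop case): flow efficiency is
# the share of total lead time actually spent adding value.
steps = [
    # (step, value-adding hours, waiting hours)
    ("analysis", 4, 16),
    ("development", 12, 8),
    ("code review", 1, 24),
    ("test & deploy", 3, 40),
]

value_time = sum(value for _, value, _ in steps)
lead_time = sum(value + wait for _, value, wait in steps)
print(f"Lead time: {lead_time} h, flow efficiency: {value_time / lead_time:.0%}")
# -> Lead time: 108 h, flow efficiency: 19% - the long waits are where to optimize
```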

The Escher School of Fish

Workshop 3 hours - Suggested by Einar W. Høst

This workshop is based on the classic 1982 paper “Functional Geometry” by Peter Henderson. The paper shows the decomposition and reconstruction of Escher’s woodcut “Square Limit”, a beautiful recursive tessellation of interleaving fish, using functional programming. We will use JavaScript as our implementation language as we follow in Henderson’s footsteps to create our own Square Limit as an SVG. We’ll see that framing a problem in the right way enables us to solve it in interesting and elegant ways. The problem in this case is the transformation and combination of pictures to form new and more complex pictures. If we think of a picture not as a collection of colored pixels but rather as a function from a bounding rectangle to a rendering, we can define simple yet powerful picture combinators that allow us to accomplish our task with ease and elegance.
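The workshop itself is done in JavaScript; purely as an illustration of the "picture as a function" idea, here is a minimal Python sketch with hypothetical names (fish_placeholder stands in for the real fish-drawing primitive) showing how combinators such as beside, above and quartet compose pictures into an SVG.

```python
# Sketch only: a "picture" is a function from a bounding box to SVG fragments,
# so combinators are just functions that split the box and delegate.
def blank(box):
    return []

def fish_placeholder(box):  # hypothetical primitive; the real one draws Escher's fish
    x, y, w, h = box
    return [f'<rect x="{x}" y="{y}" width="{w}" height="{h}" fill="teal"/>']

def beside(p, q):
    def picture(box):
        x, y, w, h = box
        return p((x, y, w / 2, h)) + q((x + w / 2, y, w / 2, h))
    return picture

def above(p, q):
    def picture(box):
        x, y, w, h = box
        # SVG's y axis grows downward, so p lands in the top half.
        return p((x, y, w, h / 2)) + q((x, y + h / 2, w, h / 2))
    return picture

def quartet(p, q, r, s):
    # One of Henderson's combinators: four pictures in a 2x2 grid.
    return above(beside(p, q), beside(r, s))

svg_body = "\n".join(quartet(fish_placeholder, blank, blank, fish_placeholder)((0, 0, 400, 400)))
print(f'<svg xmlns="http://www.w3.org/2000/svg" width="400" height="400">{svg_body}</svg>')
```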

You do not have to be an experienced JavaScript programmer to follow this workshop. The tool requirements are minimal. All you need is an editor and a browser.

A Tester’s Guide to Quality

Experience report 30 min - Suggested by Mesut Durukal

After I was assigned to my new project, I realized that the quality assurance processes were not mature and that we could introduce several initiatives to improve quality (not only in the product, but also in our development activities). I will share the story of how we ended up successfully building quality.

With the right steps, we can see how quickly a quality process can be developed. But what is a quality process, and how can we ensure quality? Is quality assurance the same as running tests? As you can guess, the answer is no. So we will talk about different aspects of quality assurance. By means of the initiatives I introduced, we improved not only the quality of the product, but also the efficiency of the agile ceremonies and the visibility of information. We built a close working relationship with the Dev and Product teams to encourage early QA involvement and find ways to cope with testability issues.

Outcome: While initially there was:

  • No tracking of bugs
  • No feedback cycle on tests
  • No transparency of progress
  • No organized features
  • No metrics, no idea about coverage
  • Escaped bugs
  • Massive manual effort

After talking to product owners, developers and project leaders, I initiated a number of improvement efforts. It was not a piece of cake, since you have to act as a quality coach, stay dedicated and encourage people. A few of the initiatives were feature housekeeping, short and very understandable progress reports, pulling QA into early stages, and a bug management initiative.

A summary of the achievements is as follows:

  • Features are organized and prioritized based on criticality (impact) and probability (visibility).
  • Test cases are defined and documented.
  • A traceability matrix is generated from tests to features.
  • Cases that are not testable are revealed, and a testability improvement initiative is started.
  • A test automation framework is built, supporting reusable automated tests.
  • Locators are standardized across all sites through collaborative work with dev teams.
  • Defined test cases are implemented to be executed in the CI pipelines.
  • Test suites are generated based on priorities, and nightly jobs are scheduled.
  • All results are reported and tracked publicly on dashboards, and flaky tests are fixed.
  • The test framework is used by several teams.
  • Progress metrics are tracked, a traceability matrix is created, and coverage is measured.
  • Early QA involvement is encouraged by watching development tickets from T0.

Turning weaknesses into Strengths while Developing

Experience report 30 min - Suggested by Mesut Durukal

Just as in daily life, we can struggle and even fail while developing our products. But the most important thing is not only to stand up after failing, but also to examine the reasons and parameters of the failure. If we learn from the failure, then it is not a downfall, but an improvement. Let's discuss how we can learn from product failures.

Common failures:

  • Escaped bugs
  • Incidents in dependencies that affect our system
  • Availability issues in production
  • Flaky tests
  • Issues in different environments

Lessons learnt from failures:

  • Communication, testability, coverage
  • Chaos testing
  • Monitoring (in production)
  • Code quality
  • Compatibility testing

Isn’t Test Automation a Silver Bullet?

Experience report 30 min - Suggested by Mesut Durukal

Just as testing is an essential part of the software development lifecycle, automation is a non-negligible part of testing. Nowadays, most of us are somehow involved in automation, since it helps us perform continuous testing and minimizes manual effort. So it sounds like a silver bullet. But is it? Let's discuss the biggest pitfalls of test automation by going over real-life experiences.

Motivation: As we all know, automated testing is a great way to reduce manual effort, since it replaces the execution of tests by a human tester. So we can save not only a huge amount of manpower, but also time and cost.

Being aware that automation is still very useful, perhaps the question we need to explore is, what doesn’t it solve? Or what are the biggest pitfalls in building test automation itself?

In this talk, I have collected my personal experiences focused on the challenges of test automation in global projects. I will share numerous real-life examples of things that cause trouble in testing, and ways to cope with them.

Common problems: Some of the greatest difficulties are:

  • Coping with updates/changes
  • Dealing with unstable behaviors
  • Tricky behaviors that are not easy to automate
  • Hardware in the SUT
  • Testability issues
  • Perception/AI testing
  • Environment setup
  • Implementation and maintenance
  • Time-consuming executions
  • Reproduction of the issues found during executions

Fundamental solutions:

  • Collaboration with the product team and feature housekeeping
  • Collaboration with the development team and improving testability
  • Locator/selector improvement: usage of test data IDs
  • Usage of various verification approaches
  • Simulation and test harnesses
  • Non-functional testing
  • A smart automation strategy

Do Bugs Speak?

Experience report 30 min - Suggested by Mesut Durukal

Do bugs speak? Yes, they do. People speak different languages like English, German, French, Chinese etc. But is it possible to communicate with bugs? It is important to understand them, because they really do tell us something. There is valuable information underlying the defects of a piece of software, and mining that information from defects promises improvements in terms of quality, time, effort and cost.

Elevator pitch: Revealing bugs is very important for improving quality. But how about avoiding them in the first place? And how about collecting lessons learnt from previous bugs? Let's talk about how we can analyze previously reported issues to improve our future activities.

Problem definition: A comprehensive analysis of all reported defects can provide precious insights about the product. For instance, if we notice that a bunch of defects heap together on one feature, we can conclude that the feature should be investigated and cured. Or we can make observations about the severity or assignee of similar defects. In other words, there are potential patterns to be discovered underneath the defects.

Wrap-up: Defect analysis is very important for QA people, and especially for QA managers. We use many angles to get an idea about the product itself and about our procedures. For instance, by monitoring the defect distribution across testing types, we will discuss how to judge the quality of our testing approach, i.e. whether we are applying all types (functional, performance, documentation, etc.) in a balanced way. Over another graph, which tracks the gap between open and resolved defects, we will discuss what actions we can take when the gap widens. Finally, we will see how ML assistance can reduce manual effort and cost.
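As a purely hypothetical illustration of the kind of analysis the talk describes (not the speaker's actual tooling), a few lines of Python are enough to surface defect heaps per feature, the distribution across testing types, and the gap between open and resolved defects.

```python
# Hypothetical defect records - not real project data.
from collections import Counter

defects = [
    {"feature": "search", "test_type": "functional", "status": "open"},
    {"feature": "search", "test_type": "functional", "status": "open"},
    {"feature": "search", "test_type": "performance", "status": "open"},
    {"feature": "checkout", "test_type": "functional", "status": "resolved"},
]

per_feature = Counter(d["feature"] for d in defects)
per_type = Counter(d["test_type"] for d in defects)
status = Counter(d["status"] for d in defects)

# A feature collecting a disproportionate share of defects is a candidate for
# investigation, and a widening open-vs-resolved gap is a signal to act on.
print(per_feature.most_common(1))            # [('search', 3)]
print(per_type)                              # distribution across testing types
print(status["open"] - status["resolved"])   # the gap: 2
```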

Results & conclusion: In this session, we discuss data mining from bugs and the use of ML in defect management. The objectives of the study are:

  • (a) To present the ways in which defects can be analyzed
  • (b) To present how ML can be used to make observations about defects
  • (c) To provide empirical information supporting (b)

100% coverage with automated testing, you say?

Lightning talk 10 min - Suggested by Gerd Stalheim Wiggen

With a mindset and methodology geared towards delivering faster and more often, every bottleneck must be removed. Teams don't have time to wait for a tester to spend days testing what has been built after it is finished. The solution is then to write automated tests. Can you then choose to replace a tester with automated tests?

I take Martin Fowler's test pyramid as my starting point to look at what needs to be considered when automation is introduced. Then I want to examine whether automation can really replace a human tester. Is testing just about getting a large set of green tests before going to production? Or is it about having control over the quality of the product?

Continuous Deployments – Moving an entire organisation to fully automatic deployments

Experience report 30 min - Suggested by Tobias Mende

Moving an entire organisation towards continuous deployments is an interesting and challenging thing to do. It requires not only technical, but also organizational and cultural changes to be a success. At BRYTER, we recently moved from manual deployments twice a week to fully automatic, continuous deployments multiple times a day, and we implemented this change for roughly 15 teams of around 60 engineers. In this talk, I will share our journey and the learnings along the way.

My crafting project became critical infrastructure

Experience report 30 min - Suggested by Elizabeth Zagroba

Driven to madness by the normal workflow for testing my application, I wrote a small Python script in a couple of days. It called some APIs to build the app and deploy it to a hosted environment. It ran in my terminal, printing output often enough that I wouldn’t get distracted. It solved my immediate problem.

But that wasn’t the only problem it solved. It replaced a manual piece of our release process with an automated step, allowing my team to automate our pipeline. Then other teams copied us. Soon, a dozen teams in three units were trying to add and request features so that my personal pet project could become part of their merge request and release pipelines too. As more changes urgently needed to serve teams in release-time crunches, I merged in code I didn’t agree with to keep everyone unblocked. The code base became something I dreaded, and I stopped maintaining it.

The next time a merge request came in, I was able to pay it the time and attention it deserved. I worked with the code submitter to improve usability. Another dev forked the code to build a UI component, serving a completely different purpose. Seeing how many individuals and teams used this code reignited my interest in maintaining it. I wrote tests for the repository, allowing me to finally refactor away the changes I’d dreaded. And the next contributor to the code base added a test without being asked. I no longer dread my little Python script. I support and maintain a critical piece of infrastructure, and I’m excited to do it.

Don't settle for a playground

Lightning talk 10 min - Suggested by Einar W. Høst

The software industry seems to have agreed that team autonomy is a good thing, much like agile. Unfortunately, it seems to be doing some strange things with that autonomy - again, much like agile. Instead of creating self-governing and self-sufficient teams with responsibility for business goals and outcomes, we are erecting tiny playgrounds, isolated sandboxes, where the teams focus on best practices for software delivery, safely boarded up by OKRs to provide the illusion that some external leadership is still in control of the whole endeavour, not to worry. Why are we doing this? Why are we repeating the mistake of decoupling our so-called product development teams from the reality of the business? Why do we allow the cord of feedback to be broken? Why do teams still live on a plane of existence separate from where strategic plans and priorities are made, negotiations happen, money is allocated, teams put together? If these vital activities are meant to be outside the realm of influence for the teams, we might as well drop the charade. The autonomy of the playground is by definition insignificant and irrelevant to the business. We don’t need it. We don’t need more theatre.

Briefcase of Performance Testing

Experience report 30 min - Suggested by Varuna Srivastava

In this session, Varuna will share a specific case study of how to strategise e2e performance testing of a product. When we talk about performance testing, the first thought is always server-side performance testing, but there is a lot more to strategising performance testing than that. We often overlook the client-side aspects: even if my search APIs respond within the given SLAs, when users search for something in my application it can take ages to show the result. In this talk, Varuna will focus on avoiding thinking about performance testing in isolation on either the server or the client side, because performance testing these apps is very important from a go-to-market perspective. I will share how we started implementing e2e performance testing as part of our delivery cycle.

Session outline:

  • What and why of performance testing
  • Types of performance testing
  • Shift left in performance testing
  • Key questions to ask before getting started
  • Client-side vs server-side performance testing
  • A simple quiz on performance testing
  • Exercise on a client-side tool
  • Exercise on a server-side tool

Note: This can also be run as a 3-hour workshop for 15-20 participants.

Fruitful Design Patterns in Test Automation

Workshop 3 hours - Suggested by Varuna Srivastava

Participate in this workshop to learn how to put together the advanced concepts of an API test in a framework that is scalable, robust, easy to read, and eliminates the brittleness in your checks.

This workshop will introduce you to advanced techniques and design patterns, and teach you how to break down large, flaky UI tests into quick and simple API tests. You will get practical hands-on experience with preferred design patterns while designing a framework, and on completion of this interactive workshop you will leave with your very own example automation framework that demonstrates advanced principles of API test automation design. We will create a poll to select the language (Java, C#, TypeScript) in which you prefer to design and use the framework.

Outline/structure of the session:

  1. Basics of REST principles.
  2. API architecture and types of API testing.
  3. Do's and don'ts of API testing.
  4. Create a framework:
     a. Add e2e functional tests using APIs.
     b. Introduction to design patterns and error handling.
     c. Handle the performance testing of APIs.
     d. Add checks for security threats.
  5. Brief on how a framework can be enhanced.

Key takeaways:

  1. A robust and scalable framework with advanced principles for API testing.
  2. A selection of design patterns for framework design.
  3. How to design a framework that covers the functionality, security and performance of an API.
  4. A framework which handles backward compatibility of APIs.

Technical requirements:

  1. Rest-Assured to employ web services to make your tests quicker and less brittle.
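The workshop builds its framework in Java, C# or TypeScript (with tools such as Rest-Assured); purely to illustrate why an API-level check is quick and far less brittle than a UI flow, here is a small Python sketch using pytest-style tests and requests against a hypothetical service.

```python
# Illustrative sketch only - the endpoint and payload are hypothetical.
import requests

BASE_URL = "https://api.example.com"   # hypothetical service


def create_order(session: requests.Session, item: str, qty: int) -> dict:
    response = session.post(f"{BASE_URL}/orders", json={"item": item, "qty": qty}, timeout=5)
    response.raise_for_status()
    return response.json()


def test_order_is_created():
    with requests.Session() as session:
        order = create_order(session, item="book", qty=2)
        # Assert on the API contract, not on pixels and page loads.
        assert order["item"] == "book"
        assert order["qty"] == 2
```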

Recovering from technical bankruptcy - ensemble style

Lightning talk 10 min - Suggested by Kjersti Berg

One year ago I started working on a product that was suffering from overwhelming technical debt. Users were not happy, the product was buggy and slow, the team behind the product wasn’t happy, code was not pushed to production out of fear of introducing more bugs, and new features could not be developed, so management was not thrilled. In short, the product could be described as technically bankrupt.

I believe it is possible to recover from this situation, and I’ve been working together with the team on finding ways to do that.

I’d like to share some of what we’ve learned over the past year, specifically how valuable it is to do actual work as a group:

  • Working together develops a shared understanding and a shared language
  • Working together helps restore context that is lost
  • Working together reduces the risk involved in changing legacy code with little or poor test coverage

Blazor to the rescue! See a hopeless back-end developer learn some front-end.

Experience report 30 min - Suggested by Maria Zieba

What is the first thing that comes to mind when you hear “frontend”? The answer is probably JavaScript or one of its many frameworks. What about people who prefer C# or just can’t seem to become friends with JavaScript? What about those who do not want to spend a considerable chunk of time learning a new technology?

Blazor to the rescue! Let me show you my journey of building a little Pokédex app, using a programming language I know best, without the dreaded JavaScript or fighting with npm.

Finally, I will try to answer some questions. What makes Blazor such a good choice? What lessons did I learn in the process? How did the framework help me understand not only front-end development better but also some elements of C# that I used to struggle with?

Tune in for some bloopers, too.

Universal Challenge

Experience report 30 min - Suggested by Terje Tjervaag

Recently I gave myself a challenge: for a month I turned my phone's screen black and used it exclusively with Nora, my VoiceOver screen-reader voice!

This is the story of the challenges I ran into, how Nora and I eventually learned to become friends, a slightly awkward ticket inspection, and the one evening I sort of gave up, and what that taught me.

Not least, you will get some good tips on what, in my experience, makes an app or website more accessible to everyone.

11 tips for greener development of greener software

Lightning talk 10 min - Suggested by Kent Inge Fagerland Simonsen

Every industry must contribute to the green transition. By now this is a rather uncontroversial claim, so we should assume it also applies to those of us who make a living writing code. As software developers we can contribute in two ways. The first, and undoubtedly most important, is to build software that contributes to the goals of the green transition. The second is to ensure that the software we build is made in the gentlest possible way and has a minimal footprint. This lightning talk is about the second way: reducing the energy needs of our software and of its development.

Are pull-request reviews too nice?

Lightning talk 10 min - Suggested by Kent Inge Fagerland Simonsen

Pull-request (PR) reviews resemble, and are quite possibly inspired by, the peer review process of academic journals and conferences. Yet there is one important difference: PR reviews are much nicer. In this talk I will illustrate this difference with examples, carefully selected to make my point, of both PR reviews and peer reviews I have received over the years, and then pit the two styles against each other.

Methodologies between Scylla and Charybdis

Lightning talk 10 min - Suggested by Maja Jaakson

What happens when you dust off your books on Nietzsche and Ancient Greek theatre, read the Wikipedia articles for them instead, and then put together a talk on development practices based on your “scholarly investigation”? Come find out at Maja’s lighthearted talk, where she will irresponsibly muse aloud about high art, sea monsters, Greek gods, and the balance we strike when doing our best dev work.

Artisanal HTTP - or HTTP by hand

Workshop 3 hours - Suggested by Bjørn Einar Bjartnes

In this workshop, we will dig into HTTP 1.1 - the Hypertext Transfer Protocol - from the ground up. We will iterate on the problem of communication between computers, starting by typing text in a terminal on one computer and sending it over TCP/IP to another computer. We will gradually build on this, and before you know it, we are talking to HTTP-savvy programs like browsers. Along the way, we’ll introduce tools such as netcat, curl, jq, wireshark, nginx and k6.
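As a taste of what "HTTP by hand" means, here is a small Python sketch (assuming example.com is reachable on port 80) that writes an HTTP/1.1 request as plain text onto a TCP socket and reads back the raw response; in the workshop the same exchange is done interactively with tools such as netcat.

```python
# Minimal sketch: speaking HTTP/1.1 "by hand" over a plain TCP socket.
import socket

request = (
    "GET / HTTP/1.1\r\n"
    "Host: example.com\r\n"
    "Connection: close\r\n"
    "\r\n"
)

with socket.create_connection(("example.com", 80)) as sock:
    sock.sendall(request.encode("ascii"))
    response = b""
    while chunk := sock.recv(4096):
        response += chunk

# The status line and headers are just text, separated from the body by a blank line.
head, _, body = response.partition(b"\r\n\r\n")
print(head.decode("ascii", errors="replace"))
```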

Our goal is that you should understand the capabilities of HTTP better, be able to design solutions that use the capabilities of HTTP to its potential - and have new tools and tricks you can use when troubleshooting.

The format will be highly interactive, so bring a laptop with a Linux terminal of some sort (For example WSL2 with Ubuntu on Windows, a Mac or a real Linux box). If you would like to do the workshop in a pair, bring a friend - or let us know and we can hook you up with someone. For those that prefer to work alone, it is perfectly fine to talk to your own browser on your own laptop, too.

3 Planer For Ditt Neste Programmeringsproblem

Lightning talk 10 min - Suggested by Kent Inge Fagerland Simonsen

Synes du at det er mer enn nok metodikker, strategier, mønstre og inndelinger å holde rede på som utvikler?

Her vil du presentert tre planer å følge når du støter på et programmeringsproblem.

From bricks to circles: learn the onion architecture

Workshop 3 hours - Suggested by Lars Lønne

The layered architecture, with the database on the bottom, is widely used in software development today. I always find testing difficult with this architecture, with all the mocking and stubbing necessary to make the system function. Surely there’s a better way to do it?

As it turns out, there is. In this workshop, we will be exploring the onion architecture. Learn how to isolate the difficult parts, such as database connections and API clients, and make most of your application easily testable with simple, fast unit tests. We will start with an application written in a layered style, and step by step refactor it to the onion architecture. On the way, we will go from a few complicated tests to a large suite of simple and fast tests that cover almost our entire application.
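As a rough sketch of the end state (the workshop's own exercise code will differ), the core logic depends only on a small port, and a tiny in-memory adapter replaces the database in tests, with no mocking framework needed.

```python
# Minimal onion-style sketch: the domain core depends on a port, not on the database.
from dataclasses import dataclass
from typing import Protocol


@dataclass
class Order:
    order_id: str
    total: float


class OrderRepository(Protocol):
    """Port: the only thing the core knows about persistence."""
    def find(self, order_id: str) -> Order: ...


def apply_discount(repo: OrderRepository, order_id: str, pct: float) -> float:
    """Core logic: no SQL, no HTTP, trivially unit-testable."""
    order = repo.find(order_id)
    return round(order.total * (1 - pct), 2)


class InMemoryOrders:
    """Test adapter standing in for the real database adapter."""
    def __init__(self, orders: dict[str, Order]):
        self._orders = orders

    def find(self, order_id: str) -> Order:
        return self._orders[order_id]


def test_apply_discount():
    repo = InMemoryOrders({"42": Order("42", 100.0)})
    assert apply_discount(repo, "42", 0.1) == 90.0
```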

Circular principles for software development

Lightning talk 10 min - Suggested by André Heie Vik

A circular economy is an economy that uses and reuses resources more efficiently. If you’re like me, that probably makes you think of mass production and waste management.

What would circular principles for software development look like? Which resources do we and the systems we make consume, and how can we use and reuse these resources in more efficient ways?

This lightning talk will give you a broader perspective on how what we build can affect the world around us for better or worse, and how we can build more of the things that make things better.

Reflecting on half a decade of property based testing at Equinor

Experience report 30 min - Suggested by Eivind Jahren

When I joined Equinor 5 years ago, I naively advocated for property based testing (PBT). PBT had been an academic functional programming oddity, but had grown into a tool that was ready for industry. I hadn’t really thought through what PBT would mean for our department, but my experiment lucked out! The hypothesis PBT framework for Python was a really good fit for our mostly Python codebase. It is easy to understand, developers are excited to try it out, and it uncovers hard-to-find problems. Perhaps one of the most useful features for us is that it helps debugging by minimizing failing inputs. This talk is an introduction to PBT and fun stories of bug hunting at Equinor.
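For readers who have not seen hypothesis before, a minimal property-based test (an illustration, not an actual Equinor test) looks like this: the library generates many inputs and shrinks any failing case to a minimal example.

```python
# Illustrative property-based test with the hypothesis library.
from hypothesis import given, strategies as st


def encode(text: str) -> bytes:
    return text.encode("utf-8")


def decode(data: bytes) -> str:
    return data.decode("utf-8")


@given(st.text())
def test_roundtrip(text):
    # hypothesis generates many strings; a failing one is shrunk to a minimal input.
    assert decode(encode(text)) == text
```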

What is Machine Learning? All you need to know in 10 minutes

Lightning talk 10 min - Suggested by Oliver Zeigermann

Machine Learning (ML) can be seen as an alternative way of developing software. Following those lines, software developers should benefit from knowing how to do ML. But where to get started? What is important and what can be ignored? Will we need advanced math for that? (Spoiler: we don’t.) This talk will answer those questions and get you started with ML.
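As a tiny, hypothetical illustration of that "alternative way" (using scikit-learn; the talk itself may use different examples), we show the machine labelled examples instead of hand-coding the rule.

```python
# Sketch: instead of writing "orders above a threshold are risky" by hand,
# we fit a model to made-up labelled examples and let it learn the rule.
from sklearn.tree import DecisionTreeClassifier

X = [[20, 0], [250, 1], [300, 0], [40, 1]]   # [order_value, is_new_customer]
y = [0, 1, 1, 0]                              # 0 = ok, 1 = risky (made-up labels)

model = DecisionTreeClassifier().fit(X, y)
print(model.predict([[280, 1]]))              # -> [1]
```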

The Tao of Software Development

Lightning talk 10 min - Suggested by Oliver Zeigermann

Software development is not merely writing lines of code. It involves a lot of concepts, requires soft skills and a philosophy of coding (e.g. test driven, top-down, bottom-up). Surprisingly, ancient Chinese philosophy has answers to many of these questions, even technical ones like “why should we do machine learning?”. We will also talk about emptiness, final definitions, knowing and not knowing, just to name a few concepts.

Software Architecture is not a swearword

Workshop 1.5 hour - Suggested by Oliver Zeigermann

Within the community of coders, “being an architect” or “doing architecture” is often looked down upon. When referring to architecture we often mean drawings of boxes and arrows that are pointless and have no relation to what is really being done.

However, architecture can also be defined as the important decisions, whatever they are (https://martinfowler.com/architecture/). There are always important decisions, and the code only contains the how, not the why. Such decisions often have to be made early in a project, require compromise and, if they are bad, have the potential to make the project fail.

In this workshop we will look at the important decisions of projects, and how to identify and justify them. This is independent of any technology or approach. We will also discuss how to document those decisions, so people actually trust them to be relevant and up to date.

There will be exercises on paper to be solved in teams.

The philosophical foundations of programming - mathematics, philosophy and category theory

Lightning talk 10 min - Suggested by John Grieg

What is the relationship between mathematics and the world (observable reality)? What is the relationship between mathematics, logic and our constructed models of the world? That mathematics and logic matter for IT is almost self-evident, but what is the foundation of mathematics, and what are the consequences of different choices of philosophical foundation for practical work in IT? I will present my view of mathematics from an angle that provides a good starting point for reflection. Visions of AI and quantum computers will be touched on briefly.

Take back control of your data with Event Driven Systems

Experience report 30 min - Suggested by Henrik Stene

In this talk we’ll take a deep dive into how we built a new customer database using event driven technologies. We’ll discuss the basic principles of event driven design, and show how the promise of accountability made this a perfect fit for storing customer data.

This talk will contain a lot of real life examples and we will show how we implemented an event driven system using Kotlin and Kafka. We’ll present all the different parts needed to provide a valuable and usable computer system to our customers, developers and customer service representatives.
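The system in the talk is built with Kotlin and Kafka; purely to illustrate the underlying principle with hypothetical data, here is a minimal Python sketch where every change is recorded as an immutable event and the current state is derived by replaying the log, the log itself doubling as the audit trail that gives accountability.

```python
# Sketch of event sourcing in miniature - hypothetical events, no Kafka involved.
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass(frozen=True)
class CustomerEvent:
    customer_id: str
    kind: str          # e.g. "EmailChanged"
    payload: dict
    recorded_at: datetime


event_log: list[CustomerEvent] = []   # in the real system this would be a Kafka topic


def record(customer_id: str, kind: str, payload: dict) -> None:
    event_log.append(CustomerEvent(customer_id, kind, payload, datetime.now(timezone.utc)))


def current_email(customer_id: str) -> str | None:
    """Replay the log to build the current view; the log is the audit trail."""
    email = None
    for event in event_log:
        if event.customer_id == customer_id and event.kind == "EmailChanged":
            email = event.payload["email"]
    return email


record("c-1", "EmailChanged", {"email": "old@example.com"})
record("c-1", "EmailChanged", {"email": "new@example.com"})
print(current_email("c-1"))   # -> new@example.com, with the full history preserved
```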

Missing anything?
Suggest your own talk!