  • By: Christian Luda
  • 02/06/2019

Ed Burns: "JavaLand is big enough to bring in international speakers but small enough to interact with them"

We talked to JavaServer Faces co-spec lead Ed Burns about his history and his upcoming keynote at JavaLand 2019.

Ed, on March 21st, you will be a keynote speaker at JavaLand 2019. Can you give us a little preview of what participants can expect?

Absolutely, I'm very excited. I've been coming to JavaLand for many years and haven't missed one; I've been there from the beginning. I will be doing a comparative analysis of what makes a programming language platform successful. I have some experience in platform growth and success, having had a hand in the development of JavaServer Faces (JSF). One could pretty much call it a platform, although it was essentially a web framework; in a broad sense, anything that you build other software on top of can be considered a platform. Through my involvement with the JSF community and technology, I got an inside look at what worked and what did not. Knowing what makes a platform successful is helpful when you are choosing a technology, just as much as when you're trying to build your own platform. My keynote will look at Java, Go, Swift, JavaScript on Node.js, and Python, comparing them on a technical level as well as on an ecosystem and business level: how they stack up against each other, and the choices the stewards of those languages made to get where they are.

As you mentioned, this is not your first time at the conference. What do you like especially about JavaLand?

I'll come at it from two different perspectives. First, from the attendees' perspective: it's big enough to bring in international speakers and lots of different perspectives and ideas, but it's small enough and informal enough that attendees get a chance to really talk to and interact with the speakers in a variety of different contexts. You have plenty of opportunities for interaction outside of the session rooms, and of course you have the very fun environment of Phantasialand itself, plus the open park night, which is great when the weather is good. From a speaker's perspective, anything that DOAG does is top notch, well organized, and very professional. It's always a pleasure to work with DOAG, at the main conference in Nuremberg as well as at JavaLand.

Will you have some time to explore our country a little bit?

Absolutely. I love coming to Germany. Around the DOAG Conference in November 2018 I spent about a month over there, visiting old friends from the JSF community, so this time I expect to do the same. I have some good friends I can visit up in Cologne. There's a great club up there called Club Barinton; they have an open jazz jam on Thursday nights. After the conference, once I finish my Schulungstag (training day) with Oliver Szymanski, I usually shoot up there, bring my horn, and have some time to play.

You have a bachelor's degree in computer science with a minor in German. What made you decide to study German?

When I chose my university, one of the things I liked about it was that it didn't require engineering students to take a foreign language. In my sophomore year, ironically, I attended an information session on the international minor. The university had an exchange program with the Wirtschaftsuniversität Wien in Austria, and I was very struck by the idea. I got the bug, and it has been with me ever since. This was 1994. I joined the program, spending the summer in Vienna and also part of it in Germany. Since then I've been very much in love with German language and culture.

Your interest in computers was sparked by your passion for the game Tunnels of Doom, for which you also host a fan page. What fascinates you about this game in particular?

I looked into this quite a bit when I wrote the book "Secrets of the Rockstar Programmers", with the notion that people of a certain age, people who are now in their mid-to-late forties and older, had the very special benefit of simply being children at the time when personal computers were first coming out. You could not help but get exposed to them. If you were a kid then, you probably had an Atari, a Commodore 64, or an Apple. I myself had a Texas Instruments TI-99/4A. All the people I talked to for the "Rockstar Programmers" book (Rod Johnson, James Gosling, Nikhil Kothari) have their stories of how they got started and of their initial programming platforms. For me it was the TI. And this game was great, because it was an early role-playing game, basically a dungeon crawler. You had characters with different attributes: a knight, a wizard, a thief, and so on. It was pretty nice. Eventually I wondered: who wrote this game? I tracked the guy down and interviewed him, so the notion of interviewing programmers about their work is something I have been doing for about twenty years now. That interview was in 2002, and he explained to me the process of programming for the TI. He had a couple of different programming choices. The more advanced games were coded straight in assembly. The company itself was trying to build a platform so that developers could code in an easier language with a BASIC-type interface, but you couldn't do as much with it. So even back then you had the notion of different platform choices for the same hardware, and you still see that today. If you look at Java interoperating with native libraries, there are certain technologies that people use in Java today that still have native language bindings and native dependencies. It goes to show that there's always some level of choice in how to get the most out of everything from the hardware to the OS to the virtual machine.

At JavaLand, you and Oliver Szymanski are going to do a workshop on Docker and Kubernetes. What are the main advantages of using containers instead of classic virtual machines?

The first one is heterogeneity, of course. Compared with virtual machines: most of the enterprise applications you are dealing with now use a wide collection of off-the-shelf, mostly open-source technologies (Prometheus, Grafana, Kibana, all manner of cloud things) that are distributed primarily as Docker containers, which you then orchestrate with Kubernetes. So it gives you the freedom to containerize your own code, your business logic and specific enterprise applications, and then also pull in the pre-containerized technologies that all the enterprises are using nowadays. That's the most basic one, but there are also some challenges, specifically in using Java and containers together, because the two technologies evolved at very different times: containers emerged in the last six years, Java Virtual Machines in the mid-nineties. There is some care you have to take to make the two work together, largely in terms of keeping the metrics, telemetry, and memory management parameters that tune the VM in sync with what the Docker container, and most importantly the Kubernetes environment, is doing. That way you avoid problems where the VM gets unexpectedly killed because it's running out of heap space and keeps getting restarted by Kubernetes, even though the application itself is just doing what it normally does. If you don't take care of passing the parameters through correctly, you can really be scratching your head and not know what to do.
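
To make that parameter-passing concrete, here is a minimal sketch of the kind of setup Ed describes (my illustration, not his code; the image name and percentage are assumptions). The idea is to start the JVM with container-aware memory flags so the heap is sized from the container's memory limit rather than the host's physical RAM:

    # Illustrative Dockerfile for a Java service in a container.
    FROM openjdk:11-jre-slim
    COPY target/app.jar /app.jar
    # Cap the heap at a fraction of the *container* memory limit.
    # Container support is on by default in JDK 10+; on JDK 8u191+
    # it can be enabled with -XX:+UseContainerSupport.
    ENTRYPOINT ["java", "-XX:MaxRAMPercentage=75.0", "-jar", "/app.jar"]

With a flag like this, the heap ceiling follows whatever memory limit the Kubernetes pod spec sets, which is exactly the synchronization that prevents the restart loop described above.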

With the introduction of Docker, containerization has become widely used. What sets Docker apart from earlier container tools?

I'm very proud of having worked at Sun for as long as I did, because in many ways it was a very engineering-driven company. With one of the earlier containerization efforts, we just set out to find hard problems and solve them, and the business side was trying to find a way to monetize and leverage that while keeping the engineering integrity high. So for many technologies you could say: look at what we have now, call it X (and by X I don't mean the X Window System, though of course Sun had that too), and find that Sun had something like it a long time ago. For containerization there was Solaris Zones; for cloud computing itself there was what we called grid computing, which we had in 2004. So, coming back to the question of what sets Docker apart, and this is what I talk about in the keynote: it's a "right place and right time" thing. If you have a really great idea but the other things around it have not yet fallen into place, it won't take off. In the case of Docker, I really do believe it was the ubiquity of GNU/Linux, which Google basically helped establish, and then the rise of cloud platforms that needed a common application storage mechanism, which Docker provided. One of the bigger value-adds of Docker is the Dockerfile language, in which you can describe how your containers are built. Another really important factor is the popularity of Docker Hub itself. It's one of those things where they took an idea that was not novel, but they did it really well and coupled it with a very good marketing effort, getting people out on the conference trail (DevRel was an important thing for them), knowing which conferences to hit, and just making sure it didn't suck.
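
As a small illustration of that Dockerfile language (my sketch, not from the interview; the file names are made up), a few declarative instructions describe exactly how an image is layered and built, which is what makes images reproducible and shareable on Docker Hub:

    # Illustrative Dockerfile: each instruction adds a layer on top
    # of the previous one, so the whole build is described declaratively.
    # Start from a published base image.
    FROM debian:stretch-slim
    RUN apt-get update && apt-get install -y curl
    WORKDIR /opt/app
    # Add your own application bits on top.
    COPY app.sh .
    ENTRYPOINT ["./app.sh"]

Running something like docker build -t example/app . turns that description into an image that can then be pushed to and pulled from Docker Hub.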

Among the supporters of Docker are big companies like Microsoft, Red Hat, IBM, Cisco, and Google. How important has this support been for the success of Docker?

That's vital as well. Initially, the names you mentioned were kind of skeptical, waiting to see if the technology would take off. However, in addition to the efforts already mentioned, Docker was trying to be part of an open standard. There was the Open Container Initiative (OCI), part of the Linux Foundation, under which the competing container technologies, most notably rkt, signed on and agreed to comply with the OCI standard for the file system and the container itself. While they do differ in terms of runtime environments and what kind of virtualization technology is used, they invested in having some kind of standards effort. That enabled the big players you mentioned to feel safe getting on board. This is something I learned early in my career: there's a trade-off between a company trying to monetize a proprietary technology and trying to get other companies to use it without them being too upset about having to join in.

What makes Kubernetes so essential for orchestrating containers built with tools such as Docker?

The Kubernetes project started at Google. One of the people who initially did it is Brendan Burns, one of the guys I talked to in the Rockstar Programmers effort, together with a colleague whose name I unfortunately don't remember. Brendan Burns later left Google and went over to Microsoft, where he's promoting Kubernetes use there; that's one thing that happens. Microsoft says: hey, this thing is taking off, let me hire the main guy, bring him in, and build Azure and, more importantly of course, our actual product portfolio around it. So, there was a need for container orchestration that lets you basically encode your entire IT operation: from the business logic that's solving your business problems, to the runtime environments that allow the software to run, to the DevOps portion that provides the telemetry and allows the systems to be monitored and dealt with appropriately as need and demand go up and down. There was a need for that, and there were a few different contenders out there; Mesosphere and Kubernetes were two of the big names. But it comes down to the same thing: significant investment was made in the Kubernetes project, and much credit goes to Google for allowing the team to take the open road and focus on some of the things I talk about in my keynote: making sure there is a strong community, making sure that there is good governance, and of course making sure the core technology itself solves the problems that people have. I think that's what the secret sauce was for Kubernetes to be successful.

What new possibilities come with Docker and Kubernetes?

Well, unfortunately for some of the more traditional on-premises, big-license vendors, it enables companies to really take their existing stack and migrate it over. There are different levels of what they call "lift and shift", but you can basically encode your entire business operation to the point where you can put it on a commodity cloud environment and achieve some cost savings. From what I can see, for data center operations it's mainly a cost-savings play. Now, I have a somewhat specific perspective, my history being inside WebLogic and the on-premises legacy world, the monolith as some derisively call it, and I think the new possibilities are more flexibility and more agility. One of the big things people talk about is: I can roll out a new version of my stack several times a day, whereas before an upgrade to the monolith would take much longer, even months. As a developer, I would like the fastest possible way to change the code, compile it, deploy it, and see that change take effect. But I would like to do it in a safe way that doesn't jeopardize anything that happened before, so among the maturity factors you need to enable this are really good continuous integration and continuous delivery practices. You wouldn't have the cloud-based development and deployment model without the safety net that those practices give you. So, I would say cost savings and more agility are the two big ones.

A concern that comes with containers is security. A popular solution is SELinux which is based on the FLASK concept by the NSA – a fact that might be frightening for some users. What's your take on this issue?

When you attempt to use Docker in your production environment, you have to recognize that you don't get security for free; you have to pay attention to every layer of the stack you are using. For example, say you are using the base Alpine Linux image for your Docker image. Docker has an inheritance-type mechanism: when you define your Docker image, you can say "take this existing image out there", the Alpine Linux distribution for example, and put a bunch of extra stuff on top of it. You are therefore inheriting everything that Alpine Linux already has. It's not safe to just say "give me that"; you have to go and look at Docker Hub and see what the image contains. Docker Hub has made that a little easier to check: they have a color-coded chart that matches an image against the CVE vulnerability database and the known patches for those vulnerabilities. Does this image have all of the latest patches? You can actually see which vulnerabilities are out there, and it gives you a very clear, easy-to-understand vulnerability overview for each image. So you just have to check, depending on your base image, whether it has what it needs. And if you don't like the SELinux FLASK thing, then you have a few different choices: you can look at what that Docker image depends on, and instead of depending on an SELinux-based one, you can depend on a lower-level foundation image and build it up yourself. If an image is published on Docker Hub, you can see exactly what's in it, and you can even see the Dockerfile that was used to create that image. That's one of the things Docker did that contributed to their success: transparency instead of opacity. They could afford to do that because they were not a systems vendor; they didn't have their own operating system they were trying to push, and they weren't trying to sell their own hardware like a lot of other vendors at the time. So, the business model lined up with enabling success in that way.
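
To make the inheritance point concrete, here is a minimal sketch (my example, not Ed's; the tags and packages are illustrative assumptions). Everything in the base layer, including any unpatched CVEs it carries, becomes part of your image:

    # Illustrative Dockerfile: this image inherits the entire alpine
    # base layer, so pin an explicit tag and check its vulnerability
    # scan on Docker Hub before building on it.
    FROM alpine:3.9
    # Extra stuff layered on top of the inherited base.
    RUN apk add --no-cache openjdk8-jre
    COPY target/app.jar /app.jar
    ENTRYPOINT ["java", "-jar", "/app.jar"]

If the base image doesn't meet your security requirements, the same file shows the alternative Ed mentions: swap the FROM line for a lower-level foundation image and build the pieces up yourself.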

As a member of the JCP, you often stress the importance of transparency. Why is it so important to you?

Well, it's an acknowledgement of how much open source has won the day. Now, this is an interesting debate, and you get into free speech and liberty when you talk about the difference between free software and open-source software; that's a very fun topic to discuss at JavaLand during the community night, especially over a few nice beers. I love going to JavaLand because you get some very good Kölsch, and as much of it as you like, and I'm very fond of Kölsch. But to come back to the question of transparency: enterprises have decided that open source is a very useful thing, and that goes hand in hand with transparency. Now, the nice thing about the JCP is that it's not as wild and crazy as plain old open source. It has a process, and it allows corporations who are a little more conservative to have a hand in things. It also stresses the very important notion of clean IP, intellectual property that is, and the open-source cloud foundations have cottoned on to this; they realize its importance. CNCF and Eclipse as well now know: hey, for a corporation to deal with and touch the technology, the IP has to be clean, and everything has to be either directly indemnified or have a pointer to something that is also directly indemnified. So, I think transparency is important for that reason.

You have already published four books. Are there plans for a new one? Maybe another Rockstar Programmers?

One of the things I have been doing since Rockstar Programmers is turning it into a podcast. I have done a few episodes with the OffHeap Podcast where we go through the existing interviews and excerpt audio, and I've done a few other interviews as well with newer people: Bryan Cantrill from the Node.js world, and, as I mentioned, Brendan Burns from Kubernetes. The funny thing about writing books, though, is that while it's definitely a labor of love for me, it's very hard to find the time to put them together; each of the four books I wrote took about 18 months, and talking to other authors, most of them say it takes about that long. Another factor is that the things people used to learn from books are now taught through courseware such as Coursera, Pluralsight, or Safari. So the successful authors I know who made a profession out of writing books have transitioned into being sort of virtual professors, and they crank out courseware instead. I think if I were to write another technical book, it wouldn't be another 300-page complete reference on something, but rather signing up with an online course provider and doing it that way.

Throughout your career you have worked on various projects. Is there one career highlight thus far you can name?

Absolutely. I would have to say it was working on JSF, because it tied together so many things that I love: building communities, and the blessing of having JSF become popular in German-speaking Europe through the long partnership I had with Irian, which is based in Vienna. They were on the JCP, they helped develop JSF, they ran a conference series in Vienna every year, and they continue to do a lot with JSF and Java EE technologies. And then, getting the chance to see the value system that I hold, of respect and participation, infused into the community itself is something that made me really happy. So, that's it. Working on JSF was rewarding and fun in that regard.

Thanks for your time, Ed.