ECOOP 2008: 22nd European Conference on Object-Oriented Programming
July 7th - 11th 2008, Paphos, Cyprus

INAUGURAL ECOOP SUMMER SCHOOL

ECOOP 2008 offers seven summer school sessions on 9-11 July, in parallel with the technical paper sessions of the conference. Attendance at the summer school sessions is included with registration for the main conference. The summer school sessions will be offered on a first-come, first-served basis: if you wish to attend a particular session, make sure you are registered for the full conference, and get to the summer school room early!


X10: Concurrent Object-Oriented Programming for Modern Architectures

Vijay Saraswat, Igor Peshansky, Nathaniel Nystrom, IBM Research

Two major trends are converging to reshape the landscape of concurrent object-oriented programming languages. First, trends in modern architectures (multi-core, accelerators, high-performance clusters such as the Blue Gene) are making concurrency and distribution inescapable for large classes of OO programmers. Second, experience with first-generation concurrent OO languages (e.g. Java threads and synchronization) has revealed several drawbacks of unstructured threads with lock-based synchronization. X10 is a second-generation OO language designed to address both programmer productivity and parallel performance for modern architectures. It realizes the Asynchronous Partitioned Global Address Space (APGAS) model for a Java-like language. PGAS, developed over the last ten years, offers the programmer a shared address space split across multiple operating system processes; it underlies languages such as UPC, Titanium and Co-Array Fortran. APGAS extends the model by supporting explicit lightweight concurrency with attendant mechanisms for termination detection and atomicity. The tutorial illustrates how common design patterns for concurrency and distribution can be naturally expressed in X10. It shows design patterns for establishing that programs are determinate and deadlock-free. It also demonstrates -- with examples drawn from High-Performance Computing -- how one can achieve run-times comparable to hand-written C.
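
For readers unfamiliar with the APGAS constructs, the sketch below is a rough Java analogue (not X10 code) of the structured fork/join and atomicity that X10 expresses directly and far more concisely with async, finish, and atomic; the parallel array-sum task and all names in it are invented for illustration.

    import java.util.ArrayList;
    import java.util.List;

    // Rough Java analogue of X10's async/finish/atomic: fork one task per
    // chunk of the array (async), wait for all of them to terminate (finish),
    // and fold each partial result into a shared sum under mutual exclusion
    // (atomic).  X10 provides these as language constructs.
    public class ParallelSum {
        private long sum = 0;                        // shared result

        private synchronized void add(long x) {      // stands in for "atomic"
            sum += x;
        }

        public long sumOf(final int[] data, int chunks) throws InterruptedException {
            List<Thread> workers = new ArrayList<Thread>();
            int chunkSize = (data.length + chunks - 1) / chunks;
            for (int c = 0; c < chunks; c++) {
                final int lo = c * chunkSize;
                final int hi = Math.min(data.length, lo + chunkSize);
                Thread t = new Thread(new Runnable() {   // stands in for "async"
                    public void run() {
                        long local = 0;
                        for (int i = lo; i < hi; i++) local += data[i];
                        add(local);
                    }
                });
                workers.add(t);
                t.start();
            }
            for (Thread t : workers) t.join();       // stands in for "finish"
            return sum;
        }
    }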

Vijay Saraswat's main interests are in programming languages, constraints, logic and concurrency. His recent papers include work on weak memory models, linearity and persistence in the pi-calculus, dependent type-systems for Java-like languages, and timed concurrent constraint programming.

Igor Peshansky is a Senior Research Software Engineer at IBM TJ Watson Research Center. He leads the X10 high-performance compiler backend team. His research interests include programming language design, program analysis and source-level annotations.

Nathaniel Nystrom is a post-doctoral researcher at IBM TJ Watson Research Center. His research interests include programming languages, compilers, tools and methodologies for constructing safe, secure and efficient systems. He has done work on software extensibility, language-based security, programming languages, run-time systems, and compiler optimizations.


Using JavaCOP for Type Systems Research

Shane Markstrum, Dan Marino, Todd Millstein, UCLA

Many researchers have proposed extensions to the type systems of Java and related languages, in order to statically enforce new kinds of properties and constraints. This tutorial demonstrates how the JavaCOP framework for pluggable type systems in Java can be used to facilitate such research. JavaCOP allows users to easily define new type annotations using Java's metadata facility, along with declarative rules that define the associated constraints. JavaCOP also includes support for incorporating dataflow information into rules and for testing pluggable type systems. Through several examples, the tutorial will illustrate how JavaCOP facilitates rapid implementation of type system extensions, easy experimentation with alternative designs, and practical validation of the resulting type system.
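
To give a flavour of the user-facing side, the sketch below declares a hypothetical non-null annotation with Java's standard metadata facility and shows client code that a JavaCOP-style checker could then reject; @NonNull and the Account class are invented for illustration, and JavaCOP's own rule language is not shown.

    import java.lang.annotation.ElementType;
    import java.lang.annotation.Retention;
    import java.lang.annotation.RetentionPolicy;
    import java.lang.annotation.Target;

    // A hypothetical type annotation declared with Java's metadata facility.
    // A pluggable checker gives it meaning; plain javac simply ignores it.
    @Target({ElementType.FIELD, ElementType.PARAMETER, ElementType.LOCAL_VARIABLE})
    @Retention(RetentionPolicy.CLASS)
    @interface NonNull {}

    class Account {
        @NonNull String owner;           // constraint: owner must never be null

        Account(@NonNull String owner) {
            this.owner = owner;          // fine: a @NonNull value flows in
        }

        void clear() {
            this.owner = null;           // a non-null checker would reject this line
        }
    }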

Shane Markstrum is a Ph.D. candidate in Computer Science at the University of California, Los Angeles (UCLA). Shane received his B.S. from Harvey Mudd College and his M.S. from UCLA.

Daniel Marino is a Ph.D. student at UCLA working with advisor Todd Millstein in the area of type systems and programming languages.

Todd Millstein is an Assistant Professor in the Computer Science Department at UCLA. Todd received his A.B. from Brown University and his M.S. and Ph.D. from the University of Washington.


Teaching and Doing Formal Language Theory with the SASyLF Proof Assistant

Jonathan Aldrich, CMU

Teaching and doing formal programming language theory is hard. It's easy to make mistakes and hard to find them. Proof assistants can help check proofs, but their learning curve is too high to use in most classes (and is a barrier to researchers too). In this tutorial we present SASyLF, an LF-based proof assistant specialized to checking theorems about programming languages and logics. SASyLF has a simple design philosophy: languages, their semantics, and their meta-theory should be written as close as possible to the way they are written on paper. We will show how to use SASyLF to formalize languages and their semantics, and how to prove meta-theorems about them. We will share our experience using SASyLF in a Carnegie Mellon University course in Spring 2008. After the tutorial, attendees should be comfortable enough with the tool to prove most theorems taught in a graduate-level type theory course that emphasizes objects. The tutorial will be hands-on, so bring a laptop that can run Java 1.5 (or share with a friend).
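
As a point of reference, the kind of "on paper" statement SASyLF is meant to mirror is a typing rule together with a meta-theorem, such as the following standard example for a small typed lambda calculus (illustrative only, not drawn from the SASyLF distribution):

    % An on-paper typing rule and meta-theorem of the kind SASyLF checks.
    \[
    \frac{\Gamma \vdash e_1 : \tau_2 \to \tau \qquad \Gamma \vdash e_2 : \tau_2}
         {\Gamma \vdash e_1\; e_2 : \tau} \quad (\textsc{T-App})
    \]

    \textbf{Theorem (Preservation).}
    If $\vdash e : \tau$ and $e \longrightarrow e'$, then $\vdash e' : \tau$.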

Jonathan Aldrich is an Assistant Professor in the School of Computer Science at Carnegie Mellon University. Aldrich's research contributions include techniques for verifying object and component interaction protocols, modular reasoning techniques for aspects and stateful programs, and new object-oriented language models. For his work on verifying software architecture, Aldrich received a 2006 NSF CAREER award and the 2007 Dahl-Nygaard Junior Prize, given annually for a significant technical contribution to object-oriented programming. Aldrich developed SASyLF based on his personal experience both with the challenges of teaching formal language theory and with the difficulty of learning existing proof assistants.

http://www.cs.cmu.edu/~aldrich/SASyLF/


Data Parallelism in Ct

Gansha Wu, Xin Zhou, and Neal Glew, Intel

Parallelism is going mainstream. Many chip manufacturers are turning to multicore processor designs rather than scalar-oriented frequency increases as a way to get performance in their desktop, enterprise, and mobile processors. This endeavour is not likely to succeed long term if mainstream applications cannot be parallelized to take advantage of tens and eventually hundreds of hardware threads. Data parallelism - where large data structures such as vectors drive the creation of threads - is one way of parallelizing applications that has been particularly successful in a number of application areas, including image processing, graphics, scientific computing, and finance. This tutorial will introduce the data-parallel programming model and show a number of applications of it to these areas. The ideas will be presented concretely in Ct, a data parallelism library for C++ developed at Intel.
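
Ct itself is a C++ library, but the underlying pattern can be sketched in plain Java with no Ct APIs: the size of the data, rather than the structure of the program, decides how many threads are created, and every thread applies the same elementwise operation to its own slice. The scaling example below and its names are invented for illustration.

    // Illustrative data parallelism in plain Java (no Ct APIs): the vector's
    // length determines the number of worker threads, and each worker applies
    // the same elementwise operation to its own slice.
    public class ScaleVector {
        public static void scale(final double[] v, final double factor)
                throws InterruptedException {
            final int slice = 1024;                          // elements per worker
            int workers = (v.length + slice - 1) / slice;    // data drives thread count
            Thread[] pool = new Thread[workers];
            for (int w = 0; w < workers; w++) {
                final int lo = w * slice;
                final int hi = Math.min(v.length, lo + slice);
                pool[w] = new Thread(new Runnable() {
                    public void run() {
                        for (int i = lo; i < hi; i++) v[i] *= factor;  // same op per element
                    }
                });
                pool[w].start();
            }
            for (Thread t : pool) t.join();                  // wait for every slice
        }
    }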

Gansha Wu, Xin Zhou, and Neal Glew are researchers at Intel's Corporate Technology Group. Gansha has been with Intel for 7 years and leads a team researching advanced compiler and runtime technology for future Intel architectures. Xin has been with Intel for 5 years and leads the study and analysis of Ct programmability and Ct workloads. Neal has been with Intel for 6 years and leads a project implementing a parallel functional programming language.


A Short Introduction to Newspeak

Gilad Bracha, Cadence Design Systems

Newspeak is a new programming language, descended from Smalltalk. Like Self, Newspeak is a message-based language: all computation - even an object's own access to its internal structure - is performed by sending messages to objects. However, like Smalltalk, Newspeak is class-based. Classes can be nested arbitrarily, as in Beta. Since all names denote message sends, all classes are virtual; in particular, superclasses are virtual, so all classes act as mixins. There is no static state in Newspeak. Instead, top-level classes act as module definitions, which are independent, immutable, self-contained parametric namespaces. They can be instantiated into modules which may be stateful and mutually recursive. Newspeak is an object-capability language, providing a sound foundation for security. Each Newspeak module runs in its own sandbox, and module code is always re-entrant. Naturally, like its predecessors, Newspeak is reflective: a mirror library allows structured access to the program meta-level, in a manner consistent with object-capability based security. We discuss Newspeak's suitability for problems such as domain specific languages and network-based software distribution, and illustrate Newspeak's features through substantial examples.

Gilad Bracha is a Distinguished Engineer at Cadence Design Systems. Previously, he was a Computational Theologist and Distinguished Engineer at Sun Microsystems. He is co-author of the Java Language Specification, and a researcher in the area of object-oriented programming languages. Prior to joining Sun, he worked on Strongtalk, the Animorphic Smalltalk System. He received his B.Sc in Mathematics and Computer Science from Ben Gurion University in Israel and a Ph.D. in Computer Science from the University of Utah.


Making the Future Safe for the Multicore Era: Semantics, Analysis, and Implementation

Suresh Jagannathan, Purdue

Programmability is a key hurdle to the effective use of emerging multicore and manycore architectures. This short tutorial will discuss how to leverage (a) the analytical capability of compilers, (b) support for speculation available in concurrent language runtime systems, and (c) a programmer's domain-specific knowledge to help programmers effectively utilize the computing capability these new architectures afford. The tutorial will cover the semantics, analyses, and implementation of a speculative execution mechanism called futures that shifts the burden of effective programmability of multicore systems away from the programmer and onto the compiler and runtime system. Framed in the context of Java, we will discuss the semantics of futures, formalize desired safety properties, and present both compile-time analyses and runtime techniques that can be used to enforce correctness. Because our techniques enforce deterministic execution guarantees, safe futures facilitate a seamless migration path for sequential Java programs to multicore environments.
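
As background, ordinary Java futures from java.util.concurrent already let a caller spawn a computation and claim its result later, as in the minimal sketch below (the FutureDemo class and its placeholder methods are invented for illustration); what they do not provide is the deterministic, sequential-equivalence guarantee that the safe futures covered in this tutorial enforce through compile-time analysis and runtime speculation.

    import java.util.concurrent.Callable;
    import java.util.concurrent.ExecutionException;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;

    // Ordinary java.util.concurrent futures: the caller spawns a computation,
    // keeps working, and blocks only when the result is actually needed.
    // Nothing here prevents the spawned task and the continuation from racing
    // on shared state; that gap is what safe futures are designed to close.
    public class FutureDemo {
        public static void main(String[] args)
                throws InterruptedException, ExecutionException {
            ExecutorService exec = Executors.newSingleThreadExecutor();

            Future<Integer> result = exec.submit(new Callable<Integer>() {
                public Integer call() {
                    return expensiveComputation();   // runs concurrently with main
                }
            });

            doOtherWork();                           // the continuation proceeds in parallel

            System.out.println("result = " + result.get());  // blocks until ready
            exec.shutdown();
        }

        private static int expensiveComputation() { return 42; }  // placeholder
        private static void doOtherWork() { /* placeholder */ }
    }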

Suresh Jagannathan is a Professor of Computer Science and a University Faculty Scholar at Purdue University. Prior to joining Purdue, he was a Senior Research Scientist at the NEC Research Institute. His interests are in programming language design and implementation, distributed systems, and software engineering. He is especially interested in new language mechanisms and their associated implementation for safely exploiting concurrency on scalable multicore and manycore platforms. He received his BS from SUNY Stony Brook, and his MS and PhD from MIT.


Declarative Object-Oriented Language Implementation using JastAdd

Torbjörn Ekman, Oxford
Görel Hedin, Lund

JastAdd is a system for generating language tools such as compilers, source code analyzers, and language-sensitive editor support. Its specification language is object-oriented and has many similarities with Java, but it is also declarative, allowing computations to be defined without giving an explicit evaluation order. This declarativeness allows languages and tools to be implemented as composable extensible modules, as exemplified by JastAddJ, an extensible Java compiler. JastAddJ is itself built as a set of small composable modules, and example extensions include the implementation of non-null type checking, object-oriented metrics, and flow analysis. Furthermore, JastAddJ is used in the latest version of the Aspect Bench Compiler for AspectJ. In this short tutorial we give an introduction to JastAdd and its underlying mechanisms. These are general ideas that can be used to implement any kind of language. We also demonstrate how to add new language constructs and analyses to Java by extending JastAddJ. The intended target audience is primarily researchers who want to easily implement new language constructs and/or source code analysis tools. Both the JastAdd system and the extensible Java compiler JastAddJ are available as open-source tools at http://jastadd.org.

Torbjörn Ekman is a Research Fellow at the Computing Laboratory at the University of Oxford, UK. His research interests include extensible compilers, scriptable refactorings, domain-specific languages, and aspect-oriented programming. He can be reached at torbjorn@comlab.ox.ac.uk.

Görel Hedin is an associate professor at Lund University, Sweden. Her research interests include object-oriented languages and design, compilers and language tools, and agile methodologies. She can be reached at gorel@cs.lth.se.