I have been in Mountain View, CA for 2 weeks now. The internship at Google has been going well. The basic timeline has been as follows:
- I arrived in Mountain View on May 31st.
- My first day at Google was June 2nd.
- They did “on-boarding” for most of that week, which was kinda painful at times, but I did learn a lot about how Google works.
- On Friday I finally got a chance to do a little real work and to start figuring out what project I would be doing. More on that below.
- The next week (last week) I had only one class I had to attend so I finally started to get real amounts of work done.
I’ve been meeting lots of good people, both at Google and from my housemate Dave’s social group. There are board games every Tuesday night and dinner and movies every Saturday, so I have not been suffering from lack of socialization, which has been nice (it keeps Arthur from getting even more crazy). Yesterday I went on a hike with a bunch of Google interns. It was fun, though the group was a bit too big and for some reason everyone wanted to walk really fast, so I ended up at the very back keeping track of the stragglers and taking pictures. The stragglers turned out to be a really fun group of people.
The hike was to the beach at Point Reyes (click the link to see a map of our route). It was a really nice hike (though longer than I expected). I got some good pictures and even a couple of panoramas. I also met several really good people and chatted about all sorts of nerdy things (these are Google interns, after all).
My Google project is related to type checking Python code. I probably shouldn’t go into too much detail online since I don’t know what is secret and what isn’t, but I know I can tell you about the code that is already open-source. I have been working with PyTypeDecl and a metacircular Python bytecode interpreter called Byterun. It’s been quite interesting and I think I will be proud of the results.
So overall California is treating me well. Tomorrow I will be back to work and trying to push forward with my plan of action. And trying not to eat too many snacks. Oh, that reminds me. As some of you I’m sure know, Google has free food all over the place. They have cafes all over campus that are all free (for Googlers), and they actually produce very good food. I mean not everything is great, but it’s better than most of the $5 lunches I could get around UT. Also they have kitchenettes on each floor which have a lot of snacks and tea and stuff like that. I guess their theory is a well fed programmer is a productive programmer. I don’t mind being pampered a bit, but the chocolate they have is good enough to be really dangerous. ;-)
Posted in
Grad School,
Life (other than code),
Programming Theory,
Travel
So for those of you who don’t know, I am planning on going to grad school for Computer Programming Languages. I have been fully accepted to Portland State University in Portland, OR. I was concerned that they would make me take undergrad classes because I don’t have an undergrad CS degree, but it doesn’t look like that will be a problem. I sent them a list of theoretical CS things I have knowledge of, and I think they were impressed. The professor I am communicating with said, “I believe you are exceptionally well prepared for our MS program,” which I feel really good about.
I will be moving out to Portland, OR in mid to late August. But I will probably be going to KS for a couple weeks on the way, so I will probably be leaving NYC around the end of July. It’s so soon. I’m scared and excited.
Here is the theoretical computer science bragging document I sent them. It looks pretty impressive and it’s all true! ;-)
These are the buzz-words that I know and understand and that I
think would be applicable. I also have a more complete write up below.
* Turing machines
* The halting problem
* Gödel’s Incompleteness Theorem
* Big-O Notation
* Lambda Calculus
* Church Numerals
* Type-level meta programming
* Kernel development
* System integration
* System programming
* Compilation, assembly and linking as separate steps
* Embedded systems programming (without an OS and with a minimal OS)
* Direct I/O handling
* Interrupt programming
* Object-oriented programming
* Pure and impure functional programming
* Functional-OO-Hybrid programming
* Logic programming
* Functional Logic programming
* JOIN-Calculus and Pi-Calculus
* The Actor Model
* Software Transactional Memory
I have learned about Turing machines, the halting problem, and
Gödel’s Incompleteness Theorem. I have also learned about Lambda
Calculus and Pi-Calculus (to gain a better understanding of languages
based on them). I understand and use the Big-O notation of algorithmic
complexity (and its limitations). I have also dealt with Church
Numerals, both for encoding numbers in Lambda Calculus and in the
type systems of other languages (such as Scala) to allow for
type-level metaprogramming.
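To make the Church Numerals idea concrete, here is a rough sketch in plain Java (the class and method names are my own, purely for illustration): a numeral n is encoded as a higher-order function that applies some function f to a value n times.

```java
import java.util.function.Function;

class ChurchDemo {
    // A Church numeral: given f, return f iterated n times.
    interface Church {
        <T> Function<T, T> apply(Function<T, T> f);
    }

    // Zero applies f no times at all.
    static final Church ZERO = new Church() {
        public <T> Function<T, T> apply(Function<T, T> f) {
            return x -> x;
        }
    };

    // Successor: apply f one more time than n does.
    static Church succ(Church n) {
        return new Church() {
            public <T> Function<T, T> apply(Function<T, T> f) {
                return x -> f.apply(n.apply(f).apply(x));
            }
        };
    }

    // Decode back to an int by applying "add one" to 0.
    static int toInt(Church n) {
        return n.apply((Integer x) -> x + 1).apply(0);
    }

    public static void main(String[] args) {
        Church three = succ(succ(succ(ZERO)));
        System.out.println(toInt(three)); // prints 3
    }
}
```

The same trick, done at the type level instead of the value level, is what lets languages like Scala count with types.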
I have been using Linux for more than 10 years now. Over that time I
have done everything from desktop usage to custom turnkey system
integration and kernel development. I have worked on most parts of an
OS from the kernel to application software. I have also worked on
systems with both no OS (PIC microcontrollers) and a very small OS (TI
calculators), so I have done direct I/O and interrupt programming
(including integration with standardized protocols like USB HID). Also
I have dealt with compilation, assembly and linking as separate steps.
I have learned new languages on a regular basis, not because I need
to program in them but because new languages teach me things about
programming and programming languages in general. For instance, I
learned about pure functional programming from Haskell and about
hybrid-OO-functional programming from Nemerle (and later Scala). I
also understand monads as collections, processes, and
pseudo-containers like the IO monad (though not the category theory
from which they derive, I must admit). I also spent some time working
with JOIN-calculus (JoCaml) and the actor model (Erlang and Scala).
Recently I have been reading about Software Transactional Memory and
how it can be integrated with Actors and how transactions can follow
messages from actor to actor. I have read a lot about JOIN-Calculus
and Pi-Calculus because I am very interested in natively parallel
programming abstractions. I think natively parallel abstractions of
computation will be very important in the future of massively
multi-core computers.
I forgot to mention Object-Oriented programming above because I have
been using it for so long (I first learned OO with C++ when I was
around 14). I have also attempted to learn a few logic languages
(Mercury and Prolog, and also Curry from PSU), but I admit I have had
very little luck. It has turned out to be the hardest kind of
language for me to grasp. I have a basic understanding of how the
languages work: I know what the syntax means, how variable binding
works, and how all these pieces should fit together to allow
arbitrary programs. I also read an article by Sergio Antoy et al.
about multi-threaded functional logic language implementations. It
was very interesting.
Posted in
Grad School,
Programming Theory
I just found a Java “gotcha” (there certainly are plenty, aren’t there?).
When a Java object is constructed, the superclass constructor is called first. So far this makes sense. But this also applies to initializations that are written at the point of declaration, like:
Object obj = null;
This is different from leaving out the = null, in that obj is set to null after the superclass constructor runs. This is usually not a problem, but I have a situation where some of the initialization occurs in a method called from the superclass constructor (a dubious design, I know). The result is that if the initialization method called from the superclass sets obj, that change will be overwritten with null in the subclass constructor.
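Here is a minimal, self-contained sketch of the situation (with made-up class names standing in for my real code):

```java
class Base {
    Base() {
        init(); // calls the subclass override BEFORE the subclass's field initializers run
    }

    void init() {}
}

class Derived extends Base {
    // This initializer runs AFTER Base's constructor returns,
    // clobbering whatever init() stored in the field.
    Object obj = null;

    @Override
    void init() {
        obj = "set during Base's constructor";
    }
}

class InitGotcha {
    public static void main(String[] args) {
        System.out.println(new Derived().obj); // prints "null", not "set during Base's constructor"
    }
}
```

The order is: Base’s constructor runs (calling the override, which sets obj), then Derived’s field initializers run and reset obj to null.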
It took me several hours to find this problem. I’m going to be much more careful with calls to virtual functions in constructors; it is considered bad form for a reason. Also, this is one of the few times I have started to understand why the functional programming fanboys bash object-oriented programming. The OO methodology is flawed in a number of ways. That being said, I think there is a place for it: it models the real world of interacting things in a very nice way. I think the actor model may turn out to be better, but that will take time.
Posted in
Java,
Programming Theory
One of the common complaints that people have about Java is that it lacks the const keyword. Yes, Java has final, but final cannot create a reference to an immutable object, only an immutable reference to a mutable object.
The issue is that there is no way to encode the promise that an object will not be mutated in the Java language. The workaround is to create a wrapper object that throws an exception if a mutator method is called. This is how the collections API handles the problem. However, this technique provides no compile-time checks: there is no way to know at compile time whether a given Collection is mutable or immutable.
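For example, the standard Collections.unmodifiableList wrapper compiles happily against mutator calls and only fails at runtime (a small demo; the class name is mine):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

class UnmodifiableDemo {
    public static void main(String[] args) {
        List<String> backing = new ArrayList<>();
        backing.add("a");

        // The static type is still List<String>: nothing tells the
        // compiler that the mutator methods are off-limits.
        List<String> view = Collections.unmodifiableList(backing);

        try {
            view.add("b"); // compiles fine, fails only at runtime
        } catch (UnsupportedOperationException e) {
            System.out.println("mutation rejected at runtime");
        }
    }
}
```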
I have thought about this a bit lately and I have realized that const as it is implemented in C++ is fundamentally a way of defining a sub-interface of the interface to a class. I mean this in the sense that the interface of “const Thing” is a subset of the interface of “Thing”. So in essence “const Thing” can be thought of as an abstract base class of “Thing” (with the exception that casting “const Thing” to “Thing” should be hard or impossible). What I’m getting at is that you could generalize this idea into the idea of an interface subset.
In Java this could be implemented by hand as follows:
public interface ConstThing {
    public String getValue();
}

public class Thing implements ConstThing {
    private String value;

    public String getValue() {
        return value;
    }

    public void setValue(String value) {
        this.value = value;
    }
}

public class ConstThingProxy implements ConstThing {
    private final Thing subject;

    public ConstThingProxy(Thing subject) {
        this.subject = subject;
    }

    public String getValue() {
        return subject.getValue();
    }
}

// Usage:
public class Client {
    public void useMutable(Thing t) {
        t.setValue("New");
    }

    public void useConst(ConstThing t) {
        t.getValue();
        t.setValue("New"); // <-- Compiler Error
    }

    public void useConstCast(ConstThing t) {
        t.getValue();
        ((Thing) t).setValue("New");
        /*
         * No compiler error, but there would be a runtime error if the code
         * providing the ConstThing wrapped the reference in a
         * ConstThingProxy.
         */
    }
}
Obviously this approach has a number of missing features and is awkward because the interface must be declared by hand. The main missing features are (in rough order of importance):
- There should be an easy way to create this structure of classes. In other words, there should be syntactic sugar for it or something like that.
- There should be a way to make the this reference in a method be an instance of all of the interface subsets that the method is part of. This would allow the compiler to provide warnings or errors if the method references members of the class that are not part of the interface subset. This is actually a very complex problem, especially when combined with inheritance. I don’t have a solution for it that I am comfortable with.
I may come back to this subject later. For instance, I think it would be interesting to see whether it can be implemented better in Scala with its more expressive type system.
Posted in
Java,
Programming Theory