In Defence of the Unitype

Created on April 23, 2014, 3:47 p.m.

Robert Harper says that dynamically typed languages are actually statically typed languages but with a single type - the unitype. This type is a union of all of the runtime types, and so contains all of their combined values. Because of this Harper says that statically typed languages are a superior superset of dynamically typed ones. In them one could use a union type to create this unitype, and program in it as if it were a dynamic language. But of course no one does this, as it would be crazy, and therefore statically typed languages must be better.
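
To make that concrete, here is a rough sketch of the idea using Python's optional type annotations to stand in for a static checker (the Value alias and duplicate function are invented for illustration): the unitype is just one big union of every runtime type, and every function consumes and produces it.

    from typing import Union

    # Hypothetical "unitype": a single union of all the runtime types,
    # as a static checker such as mypy would see it.
    Value = Union[None, bool, int, float, str, list, dict]

    def duplicate(x: Value) -> Value:
        # Every function takes the unitype and returns the unitype, so the
        # checker can say very little about what happens inside - all the
        # interesting checks are pushed back to run time.
        if isinstance(x, str):
            return x + x
        if isinstance(x, list):
            return x + x
        return x

Programming against Value everywhere is exactly programming "as if it were a dynamic language" inside a static one.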

This argument is basically correct in its assertions. Looking at dynamically typed languages as unityped gives a nice insight into how they work, and one really could program in a statically typed language as if it were a dynamic language fairly comfortably. But the reason language designers still create dynamic languages isn't because they are too stupid, stubborn, or trendy to allow for multiple types. And it certainly isn't because "dynamic" sounds cool, while "static" sounds lame (is this primary school?).

To avoid conflict on this issue I will now refer to statically typed programming languages with compile time checking (such as Java or ML) as polytyped languages, and to dynamically typed programming languages with run time checking (such as Python and Javascript) as unityped languages.

For better or worse, there are real differences between unityped and polytyped languages. Here are some of them.



Polytypes require static scope

An often forgotten aspect of polytyping is that it relies on static scope. One aspect of static scope is that just from looking at the source code one can infer, at any point, which variables are defined and which are not, including their types.

Unityped languages such as Python or Ruby typically don't support static scope. In these languages variables are stored in a dictionary of local values, which can be modified in numerous different ways at run time. Static scope would require changes to these languages at their core. Harper might laugh off any dynamically scoped language as being useless, but this at least explains why polytypes couldn't just be added to such languages as some people propose.
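
As a small illustration of why (the function and variable names below are made up), whether a name is even defined in Python can depend on values that only exist at run time:

    import math

    def define(name):
        # A module's variables live in a dictionary that can be modified
        # at run time, so which names exist after this call depends on
        # the run-time value of `name`.
        globals()[name] = 42

    define("answer")
    print(answer)   # Runs fine, yet `answer` is never assigned anywhere in the source.

    # Module attributes go through the same kind of mutable dictionary.
    math.__dict__["tau_ish"] = 6.28
    print(math.tau_ish)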

In fact, even when intended, static scope can be difficult to add. Early Lisps were designed with the possibility of static scope in mind, but many dialects, such as Common Lisp and Emacs Lisp, still don't support it fully due to complex and tricky edge cases.



Polytypes require unified models

Polytyped languages have two systems the programmer must build a mental model of. The first is the computation model. This is what happens when code is executed. The second is the type model. This is how the types in the language are used and manipulated by the compiler. In a unityped system there is no type model, and the computation model is the only thing to learn. If it is simple, as in languages such as Lua, this can be really nice.

There is no issue with learning two mental models. Programmers learn hundreds in their lifetime. The tricky part is that in a polytyped system these two models need to be unified. At run time, a function really must return the same type as the one the compiler inferred, otherwise everything will break. Although not always obvious, this unification can sometimes come at a cost. Often either the type model must be made more complex, the computation model must be made more complex, or the expressiveness of the language must be reduced.

For example, the computation model of duck typing is really simple. The method name is looked up in a dictionary and if it exists, the method is found and called. Easy to understand, easy to implement, great! But if we want duck typing in a polytyped system we need to introduce more things to unify this computation with the type system. We could introduce Type Classes, or Union Types, and get the user to declare things explicitly, but there is no way to add it for free, exactly because of this unification.
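
As a rough Python sketch of that computation model (the classes and speak method are invented for illustration):

    class Duck:
        def speak(self):
            return "quack"

    class Robot:
        def speak(self):
            return "beep"

    def make_it_speak(thing):
        # Roughly what thing.speak() does: look the name up on the object
        # and call whatever was found. No types are involved anywhere.
        method = getattr(thing, "speak")
        return method()

    print(make_it_speak(Duck()))   # quack
    print(make_it_speak(Robot()))  # beep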

Type systems fit the model of computation really well in almost all cases, but not perfectly. They require extensions and awkward additions in some edge cases. To get around this, polytyped languages tend to squeeze their computation model to fit the type model better. Designers of unityped languages, on the other hand, can squeeze their computation model to fit other things they like, such as natural language (Python), string operations (Perl), or lists (Lisp).

Another alternative is shown in languages such as Haskell, which have used concepts such as pure functions and lazy evaluation to keep the type system simple. This has complicated the computation model, up to the point where understanding how Haskell code actually executes can be very difficult. I like extreme languages, so I really like Haskell, but I do feel sorry for the suffering of the developers of GHC.

But most polytyped languages fall somewhere in-between, letting you express some common things while restricting the expression of others, being sort of type-ish and sort of computational. I find this middle ground a frustrating compromise.



Polytypes are less extensible

All features in polytyped languages must be language features.

A type system must necessarily be simpler than its computation model, otherwise it would contain just as many errors as it is trying to remove. One way to keep a type system simple is to limit its extensibility.

Computation models on the other hand should not be limited. There shouldn't be a program that cannot be expressed by the computation model. The computation model should be as expressive and useful as possible! Here unityped languages have the most flexibility, because the computation model does not need to be unified with a type system. They can add complex extensions at a whim, or let users define extensions themselves! Adding computational features to a polytyped language, on the other hand, often requires hard-coded unification with the type model, which means changes to the compiler.

Again this probably does not bother Harper, but as someone who likes exploration and freedom of expression in computation, it bothers me.



Unitypes are worse is better

I prefer to work with polytyped languages, and I have no doubt that they are the right thing (tm). But they have suffered from a bad case of worse is better. Unityped languages are far simpler to create, and often simpler to program in. They work well enough, and so have been adopted and spread. They have obvious flaws, but users have contributed workarounds and solutions to their many problems. Just like the Unix virus, they have spread, adapted, and evolved.

It isn't true that unityped languages are a subset of polytyped languages. The type system does not define the language, and having a unitype has knock-on effects in all other areas of the language - syntax, semantics, implementation, design, even style, practices and community. This makes them distinct.

In my mind the best argument for polytypes is still the original, practical one: that type systems, although far from perfect, are still the best way we know of to find errors at compile time. For this they are fantastic, and I appreciate all the work that has been done on them by people such as Harper.
