This post is about a little tool that looks like this:
As I have recently moved to Sweden and decided to finally learn myself some Swedish, I quickly ran into the problem that typing foreign characters on a Windows keyboard can be a pain. Windows provides some ways to do this:
Use alt-keycodes: you need to remember a specific alt-keycode, and even if you do, you need to actually have a num-pad on your keyboard (I'm using an Apple wireless keyboard).
Use character map: first you have to find a character, which is not necessarily easy, then you have to copy-paste it. No go.
Switch keyboards: again unworkable. If I'm chatting in Skype, say, I want to be able to talk Swedish to some people but English to others. Plus I'm using the Dvorak keyboard layout, and I generally just don't want to switch these all the time.
So I hacked up my own tool inspired by Apple's iOS and OS X, which I've (very originally) called SymWin. You can find it here on GitHub: https://github.com/mjvh80/SymWin. Note that currently you need to build it yourself (there is a script to do this, however).
When typing you can now use the CAPSLOCK key to enter symbols. I've chosen this key as an initial option because I never use it anyway and most other shortcuts on Windows are already used for something else. Further, SymWin can be temporarily disabled by clicking the task tray icon, after which the CAPSLOCK key works again as it always did.
SymWin uses a Win32 API to detect the location of the caret in any application; however, not all applications expose the caret through this API. For example Google Chrome or Mozilla Firefox do not (depending on which version you have). In that case I'll display the symbol choice in the center of your currently active screen.
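For the curious, here is a rough sketch of how caret detection via Win32 can be done from .NET. This is illustrative only, and SymWin's actual implementation may differ:

using System;
using System.Runtime.InteropServices;

// Illustrative sketch: query the caret rectangle of the focused window via
// GetGUIThreadInfo and convert it to screen coordinates.
static class CaretLocator
{
    [StructLayout(LayoutKind.Sequential)]
    struct RECT { public int Left, Top, Right, Bottom; }

    [StructLayout(LayoutKind.Sequential)]
    struct POINT { public int X, Y; }

    [StructLayout(LayoutKind.Sequential)]
    struct GUITHREADINFO
    {
        public uint cbSize, flags;
        public IntPtr hwndActive, hwndFocus, hwndCapture, hwndMenuOwner, hwndMoveSize, hwndCaret;
        public RECT rcCaret;
    }

    [DllImport("user32.dll")]
    static extern bool GetGUIThreadInfo(uint idThread, ref GUITHREADINFO info);

    [DllImport("user32.dll")]
    static extern bool ClientToScreen(IntPtr hWnd, ref POINT point);

    // Returns the caret position in screen coordinates, or null when the focused
    // application does not report a caret (e.g. some Chrome/Firefox versions),
    // in which case we fall back to the centre of the active screen.
    public static POINT? GetCaretScreenPosition()
    {
        var info = new GUITHREADINFO { cbSize = (uint)Marshal.SizeOf(typeof(GUITHREADINFO)) };
        if (!GetGUIThreadInfo(0, ref info) || info.hwndCaret == IntPtr.Zero)
            return null;
        var p = new POINT { X = info.rcCaret.Left, Y = info.rcCaret.Bottom };
        ClientToScreen(info.hwndCaret, ref p);
        return p;
    }
}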
If you find bugs or issues please report them on my GitHub page, or open a pull request for new features or bug fixes.
Download: I've added a "release" to GitHub from where you can download pre-built binaries (provided as-is).
This post describes an alternative git branching "model" that is useful when settings files are part of the repository itself and you would like to develop with different local settings without ever committing these to the public repository.
It is based on the idea that for every feature we develop, we use a feature branch that can be deleted once the feature has been fully merged, with a few tweaks described below.
The model is simple: for every feature, say foobar, we create 3 branches: foobar, foobar_settings and foobar_dev. Yes, this is an immediate drawback: lots of branches. However, I have not found this to be problematic, and it is easy to use a script to delete merged branches.
foobar is the "master" branch: it is the branch we'll use to push to the remote repository and to create a pull request from in GitHub (say)
foobar_settings is the "settings" branch: it holds commits that update local settings, changes that should never be visible on the remote repository
foobar_dev is the development branch: it holds the local settings commits plus new development changes; this is where you work
1. Create the feature branch: git checkout -b foobar (branched from an up-to-date master)
2. Create the settings branch: git checkout -b foobar_settings
3. Update local settings and commit using an easily recognisable commit message, e.g.: git commit -am " *** LOCAL SETTINGS COMMIT ***"
4. Create the dev branch: git checkout -b foobar_dev
5. Perform dev magic
6. To release: git rebase --onto foobar foobar_settings foobar_dev. What this effectively does is rebase all commits in foobar_dev, except those in foobar_settings, onto foobar.
7. git checkout foobar and finally git merge foobar_dev to prepare a pull request (this simply fast-forwards). Now either the current state is good as it is, or you've made one too many local commits that shouldn't appear publicly; in the latter case we can "squash" these using a soft reset: git reset --soft <<sha before dev rebase>> followed by git commit -am " The one commit message to rule them all"
You're good for a pull request now.
Now all of this seems like a ton of work, however, automating this with (say) PowerShell makes it very easy indeed.
If I'd like to start work on a new feature, I simply type
Git-NewFeature -feature new_awesome_feature
This automatically does git fetch origin followed by steps 1-4 above (also applying a local script to update settings automatically, as part of the process).
To "release" I simply type git-release, with an optional -squash option; this performs steps 6-7.
Finally the GitHub API makes it very easy to automate doing a pull request, so for that all I have to do is:
Git-PullRequest -title "foobar has landed" -body "Blah blah"
If a pull request is already open, all one needs to do is git push <<remote name>> foobar to push new commits to it.
Staying Up To Date
Finally, in order to stay up to date with the remote master we should do the following when on foobar:
git pull origin master (say); no rebase pull is needed because the merge commit won't be picked up in the pull request. The settings and dev branches then need to be rebased on top of the updated foobar.
Again this is easy to automate, I've called my PowerShell function Git-Update with flag -pull, it does steps 1-3 above automatically.
Tweaking Settings
So you're on dev, and you want to update a setting to work with locally. There are many options, but I usually use:
Edit the settings file
Commit this file only, with a clear local commit message (something easy to recognise, so that if it makes it to GitHub it is easy to spot and remove; see below)
Somehow I got my local setting into my pull request.
If this happens, it is easy to fix. Simply go to your foobar "master" branch, and find out how many commits since the mistake, say 5, and do an interactive rebase:
git rebase -i HEAD~6
then delete the commit *** LOCAL SETTINGS COMMIT *** in the text editor that opens, save and close it. Finally, override the pull request remotely with a force push: git push --force <<remote name>> foobar.
Inheriting selective Xunit tests from another assembly
Use case: I have a single assembly which implements several xunit tests, and I want to add Visual Studio projects with new tests that also selectively include tests from this "base assembly".
To do this we'll support an enumeration and assembly-level attributes to specify which tests to run in the "super" test projects. E.g.:
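A sketch of what this could look like; the enum and attribute names match those used in the code later in this post, the rest is illustrative:

// Sketch only: feature enum, assembly-level opt-in attribute and a tagged base test.
public enum GeneralFacts
{
    GenericSearch,
    GenericPagination
}

[AttributeUsage(AttributeTargets.Assembly, AllowMultiple = true)]
public sealed class IncludeGeneralFactAttribute : Attribute
{
    public IncludeGeneralFactAttribute(GeneralFacts facts) { Facts = facts; }
    public GeneralFacts Facts { get; private set; }
}

// A test in the base assembly, tagged with the feature it belongs to:
public class AttributeExampleBase
{
    [GeneralFact(GeneralFacts.GenericSearch)]
    public void Generic_search_returns_results() { /* ... */ }
}

// And in (say) AssemblyInfo.cs of the consuming "super" test project:
// [assembly: IncludeGeneralFact(GeneralFacts.GenericSearch)]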
By which we indicate: we support the "GenericSearch" feature and its tests but not the GenericPagination one.
Attempt 1
Simple: subclass Xunit's FactAttribute, as in the example above, and override its EnumerateTestCommands method to check which general facts were included via the IncludeGeneralFact attributes set on the test assembly.
Two problems with this: first, Xunit does not look for the FactAttribute with attribute inheritance; second, we need to get at the actual assembly being tested to be able to read those IncludeGeneralFact attributes.
Attempt 2
Ok, this needs more work. Doing some serious Googling I got no further so one has to resort to reading Xunit source code, which, thankfully, is easy to read. Diving into the source we see that ultimately there is a method GetCustomAttributes on IMethodInfo that is used by Xunit to find all [Fact] methods. So we need to find a way to get our own wrapper in there.
It turns out that one way to do this is to use a custom "runner" and add a RunWithAttribute to a class, something like this:
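Something along these lines (a sketch; the derived class name is illustrative), where InheritTestClassCommands is the custom test class command implemented below:

// In the new ("super") test assembly:
[RunWith(typeof(InheritTestClassCommands))]
public class InheritedGeneralTests : AttributeExampleBase
{
}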
Here AttributeExampleBase is the class that contains the tests above. Of course, another way to do all this is to simply inherit from the class whose tests you want to inherit (e.g. a SearchGenericTests class etc.)
So we have to implement InheritTestClassCommands. This class is just a wrapper that forwards every interface call to an instance of TestClassCommand, Xunit's native implementation. The only real change lies in its TypeUnderTest property, which will look like this:
public Xunit.Sdk.ITypeInfo TypeUnderTest
{
    get
    {
        return _mWrappedCommand.TypeUnderTest;
    }
    set
    {
        _mWrappedCommand.TypeUnderTest = value is WrappedTypeInfo ? value : new WrappedTypeInfo(value);
    }
}
The WrappedTypeInfo wraps the ITypeInfo implementation provided by Xunit. The only change there is that we want to wrap IMethodInfo, so the only methods with any beef are:
public IMethodInfo GetMethod(string methodName)
{
    var innerMethod = _mInner.GetMethod(methodName);
    if (innerMethod == null) return null;
    return new WrappedMethodInfo(innerMethod, this);
}

public IEnumerable<IMethodInfo> GetMethods()
{
    var innerMethods = _mInner.GetMethods();
    if (innerMethods == null) return innerMethods;
    return innerMethods.Select(m => new WrappedMethodInfo(m, this));
}
Thus we're wrapping IMethodInfo with WrappedMethodInfo, where the only change is the fact we want *inherited* attributes, thus:
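A sketch of that method (Reflector.Wrap stands in for however the xunit SDK turns a reflection Attribute into an IAttributeInfo; the rest of WrappedMethodInfo simply forwards to the wrapped IMethodInfo, here _mInner):

public IEnumerable<Xunit.Sdk.IAttributeInfo> GetCustomAttributes(Type attributeType)
{
    // The whole point: ask for *inherited* attributes, which xunit's own wrapper does not do.
    return _mInner.MethodInfo
                  .GetCustomAttributes(attributeType, inherit: true)
                  .Cast<Attribute>()
                  .Select(a => Xunit.Sdk.Reflector.Wrap(a));
}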
So, that's it. Well, almost. Xunit's implementation depends on Equals and GetHashCode being implemented on IMethodInfo (which is strange and asymmetrical compared to ITypeInfo, in my opinion). All they need to do is say "sure, I'm equal to the other fella if my method is the same", i.e.:
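A sketch, assuming the wrapped xunit IMethodInfo is stored in _mInner:

// Equality purely in terms of the underlying reflection MethodInfo.
public override bool Equals(object obj)
{
    var other = obj as Xunit.Sdk.IMethodInfo;
    return other != null && other.MethodInfo == _mInner.MethodInfo;
}

public override int GetHashCode()
{
    return _mInner.MethodInfo.GetHashCode();
}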
It is probably also a good idea to ensure that ToString is forwarded in all cases.
Finally, we need to communicate the ITypeInfo to the methodinfo so that we can get at it in the actual customized fact attribute. This we can simply do passing it to the constructor of WrappedMethodInfo. The custom Fact attribute then looks like this:
public sealed class GeneralFactAttribute : Xunit.FactAttribute
{
    private readonly GeneralFacts _mFact;

    public GeneralFactAttribute(GeneralFacts fact)
    {
        _mFact = fact;
    }

    protected override IEnumerable<Xunit.Sdk.ITestCommand> EnumerateTestCommands(Xunit.Sdk.IMethodInfo methodInfo)
    {
        var method = methodInfo as WrappedMethodInfo;
        if (method == null) return base.EnumerateTestCommands(methodInfo);

        // Get the assembly under test.
        var asm = method.TypeUnderTest.Type.Assembly;
        if (asm.GetCustomAttributes(typeof(IncludeGeneralFactAttribute), inherit: true)
               .Cast<IncludeGeneralFactAttribute>()
               .Any(att => att.Facts == _mFact))
            return base.EnumerateTestCommands(method);

        return Enumerable.Empty<Xunit.Sdk.ITestCommand>();
    }
}
That's it, we can now specify which tests we want inherited in our special projects.
If there's an easier way to do this or if you want more info I can always be found on twitter @marcusvanhoudt.
Executing 64 bit PowerShell from within a 32 bit PowerShell session
Due to legacy COM components I run PowerShell in 32 bit on 64 bit Windows. All fine, that's just a question of starting the right PowerShell. From within my 32 bit PowerShell script I want to use PowerShell cmdlets to administer IIS. Now it seems these in turn use COM, but the 64 bit variant - so we'll get relatively obscure errors about unknown COM CLSIDs etc.
Doing some serious amount of Googling I came across the Start-Job PowerShell cmdlet, which can start a job and has a magic -runAs32 switch. But this is the wrong way round: since a 32 bit process cannot run "as 64 bit" on a purely 32 bit machine, it seems no equivalent 64 bit switch was provided. So we have to do some work ourselves to make it happen.
Attempt 1.
Found by more Googling: find the 64 bit PowerShell executable and invoke it directly. The powershell.exe executable accepts a -Command script block; however, because we invoke it as a separate process, we have to convert the script block into a string. This causes issues if the script block closes over local variables, for example. Furthermore, there seems to be no way to pass arguments to the script block this way.
The only gotcha here is that, from a 32 bit process, the System32 folder (which in general holds the 64 bit executables on 64 bit Windows) is redirected to SysWOW64 and so does *not* give you 64 bit PowerShell. To find the executable we can use the sysnative alias, e.g. $env:windir\sysnative\WindowsPowerShell\v1.0\powershell.exe.
Another way to try to do this is to connect to a 64 bit PowerShell session on the local machine, and execute our code there. The nice thing about this way of doing it is that we can call Invoke-Command with a session to run the code in and an ArgumentList parameter to pass arguments to the script block.
The downside of this approach, however, is that the Windows service providing PowerShell remoting must be running, as theoretically this mechanism can execute a script on the other side of the planet. The script below simply enables remoting if creating a session fails.
function Ps64([scriptblock]$block) {
  $machineName = [Environment]::MachineName;
  try {
    # Note: The configuration name is what forces it to 64 bit (sketch; the body below is reconstructed from the description above).
    $session = New-PSSession -ComputerName $machineName -ConfigurationName Microsoft.PowerShell -ErrorAction Stop
  } catch { Enable-PSRemoting -Force; $session = New-PSSession -ComputerName $machineName -ConfigurationName Microsoft.PowerShell }
  Invoke-Command -Session $session -ScriptBlock $block
}
Understanding the SynchronizationContext in ASP.NET.
The SynchronizationContext is a weird animal, and it took me a little while to understand it properly (at least I am under the impression that I now do). There are many good posts about the SynchronizationContext and its uses.
Now, what is the SynchronizationContext useful for in ASP.NET?
Looking under the hood with dotPeek (who uses Reflector these days?), we can see that there are in fact two contexts in use (at least at the time of writing): LegacyAspNetSynchronizationContext and AspNetSynchronizationContext. Both of these are internal classes, exposed through the SynchronizationContext.Current property. Note that the latter is referred to internally in the Microsoft code as the "task friendly" one. You can enable it by setting the following appSettings property in your web.config:
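If memory serves, the switch (available from .NET 4.5 onwards) is this appSettings entry:

<appSettings>
  <add key="aspnet:UseTaskFriendlySynchronizationContext" value="true" />
</appSettings>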
Do not simply adjust this setting as it can lead to deadlocks! Where your code worked before, it may not work after - see below for a discussion.
Both these classes do roughly the same thing, although one is "friendlier" to the new task-based async stuff in the latest versions of .NET. They are important because:
they ensure the correct user is set on the thread (important in a web site)
they make HttpContext.Current available on the new thread
they do other stuff I'm not even aware of
As an aside on HttpContext.Current: it is stored in something called the "CallContext", which uses the "IllogicalCallContext" to store the property. I'm not sure what's so "illogical" about this context other than that it's not the "logical" call context, but the important thing to note is that it does not "flow" to new threads. By default (unless suppressed), the ExecutionContext flows from thread to thread, ensuring things like security are set correctly on new threads. The logical call context flows with it, but the illogical one does not, so HttpContext does not flow as part of the ExecutionContext. If I had to guess I'd say this is a good thing, because HttpContext is not thread safe (no, it really isn't): if it did flow, I could spin up a new thread, the context would simply flow to it, and I would end up accessing it in a thread-unsafe manner. As a word of warning: SynchronizationContext does in general flow as part of the ExecutionContext; it is just that .NET internally often chooses *not* to flow it. The methods it uses to suppress this are internal, so we cannot do the same ourselves.
So why is access to HttpContext not thread-unsafe whenever I use the SynchronizationContext? Whenever an asynchronous operation completes, it is supposed to post the AsyncCallback to the SynchronizationContext, and this post operation does the following:
enter a lock (i.e. only one operation is completed at a time)
reinstate HttpContext.Current, the current User and some other properties
execute the callback under this new context
Note that the newer ASP.NET SynchronizationContext does not execute under a lock as such, but posts the callbacks in a chain of Task continuations. I.e. post A creates a Task such that when post B happens it becomes a continuation on post A, such that they execute one after the other in order and not at the same time.
As I mentioned above using this new context can lead to deadlocks where none were previously present. For example if ASP.NET is executing a step of the pipeline it queues a task to the context that only completes when the step completes. This means that any asynchronous tasks that post to the synchronization context won't execute until this synchronous step is complete. This in turn implies that I cannot wait for a Task to complete if it uses the new synchronization context, whereas before this worked just fine. As far as I can tell there is no solution to this problem other than to use the old context (please leave a comment if there are actual ways of achieving this).
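To make this concrete, here is a sketch (my example, not from the original post) of the classic shape of such a deadlock inside an MVC controller:

using System.Net.Http;
using System.Threading.Tasks;
using System.Web.Mvc;

public class HomeController : Controller
{
    static async Task<string> GetDataAsync()
    {
        using (var client = new HttpClient())
            return await client.GetStringAsync("http://example.org/"); // placeholder URL
    }

    public ActionResult Index()
    {
        // .Result blocks the request thread; the await's continuation is queued to the
        // same (now blocked) synchronization context, so neither side can make progress.
        var data = GetDataAsync().Result;
        return Content(data);
    }
}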
So, for me this means:
if HttpContext.Current returns null somewhere in your ASP.NET code, you have not been using the SynchronizationContext correctly
as such "singleton" objects are best stored in HttpContext.Current.Items, see Cup(of T).
no need to worry about thread safety of HttpContext because the callbacks always execute in order (never at the same time)
You may wonder: if all these callbacks execute one at a time, doesn't this defeat the whole point of going asynchronous?
ASP.NET provides each request with a thread (note that we won't necessarily stay on the same thread), this is the thread we want to do our computationally bound work on in almost all cases. The callbacks we are talking about here are "computational", and so we want them to execute on the single thread for this request. The reason to go asynchronous is because we want to "yield" our thread to an IO operation, in other words as soon as our thread is waiting for IO it may as well return to the threadpool to do other useful work until that IO is completed. This way we can scale better because we don't have threads blocking for IO operations (such as reading files, contacting web services etc. etc.).
Recently I wrote a fast object serializer in F# with dynamic code generation using the .NET DynamicMethod class and its ILGenerator which allows us to generate the CIL for the method. The serializer simply enumerates all the members of an object that should be serialized, and generates code for writing these to a stream (encoded in a certain way to reduce space overhead, e.g. by using variant integers à la Google protocol buffers).
There are many ways to generate this kind of code in .NET. One way is to use Expression trees, another is to generate actual source e.g. C# and going csc on it, and yet another is to use a DynamicMethod or a dynamic assembly/type. I chose the latter as it allows me to generate exactly the IL I want to generate and I'm a bit of a masochist. To illustrate this point, emitting code to call an interface method but using the opcode Call instead of Callvirt will cause funky, seemingly inexplicable errors - anything from the runtime destabilizing to it getting an epileptic fit.
F# has support for monads, which it calls "workflows" or "computation expressions" as that is no doubt more marketable. F# workflows allow us to write F# syntax, but interpret it in a custom way. In essence F# workflows are syntactic sugar for a series of function calls. I won't go into more details here (there's plenty on the web), so let's look at how we can use this to make code generation a little more pleasant. Consider the following bit of F# code:
asm {
let array = asm.ILGen.DeclareLocal(arrayType)
let ret = asm.ILGen.DefineLabel()
// ...
yield OpCodes.Dup
yield OpCodes.Stloc, array
yield OpCodes.Ldlen
yield OpCodes.Stloc, len
for i:LocalBuilder in (0, len) do
    yield OpCodes.Ldloc, array
yield OpCodes.Ldloc, i
yield OpCodes.Ldelem, elementType
// ... etc.
yield Label(ret)
}
The nice thing here is that this looks a little like C++ syntax for inline assembler. It is also more readable than a lot of method calls on an ILGenerator and in particular the for loop is elegant. The for loop shown actually emits the IL for a for loop from 0 to the local len. Finally, it is particularly finicky to get this working so in theory we could build in some more safety checks than that which ILGenerator provides (however we haven't done this).
asm in the code above is an F# computation expression backed by the following builder type:
type EmitBuilder(ilgen: ILGenerator) =
member this.ILGen = ilgen
member this.Yield (c: EmitOp) =
match c with
| Call m -> ilgen.Emit(OpCodes.Callvirt, m)
| Label l -> ilgen.MarkLabel(l)
| Goto l -> ilgen.Emit(OpCodes.Br, l)
member this.Yield (u:unit) = u
member this.Yield (o: OpCode) = ilgen.Emit(o)
member this.Yield (inp: OpCode * int32) = match inp with (a, b) -> ilgen.Emit(a, b)
member this.Yield (inp: OpCode * LocalBuilder) = match inp with (a, b) -> ilgen.Emit(a, b)
member this.Yield (inp: OpCode * Type) = match inp with (a, b) -> ilgen.Emit(a, b)
member this.Yield (inp: OpCode * FieldInfo) = match inp with (a, b) -> ilgen.Emit(a, b)
member this.Yield (inp: OpCode * MethodInfo) = match inp with (a, b) -> ilgen.Emit(a, b)
member this.Yield (inp: OpCode * Label) = match inp with (a, b) -> ilgen.Emit(a, b)
member this.Delay (x: unit -> unit) = x
member this.Run x = x()
member this.Combine (x, y) = y()
member this.Zero (u:unit) = ()
member this.TryFinally (body: unit -> unit, final: unit -> unit) =
try
ilgen.BeginExceptionBlock() |> ignore
body()
finally
ilgen.BeginFinallyBlock()
final()
ilgen.EndExceptionBlock()
member this.For (exp: 'a * 'b, body: LocalBuilder -> unit) =
let start = ilgen.DefineLabel()
let finish = ilgen.DefineLabel()
let loopVar = ilgen.DeclareLocal(typeof<int32>)
let (from, length) = exp
this {
match box from with
| :? int32 as i -> ilgen.Emit(OpCodes.Ldc_I4, i)
| :? LocalBuilder as l -> ilgen.Emit(OpCodes.Ldloc, l)
| _ -> failwith "invalid from type"
yield OpCodes.Stloc, loopVar
// etc.
Similarly we can add support for if, while and other F# language constructs.
How does this work?
F# translates the code within the curly braces to method calls on the asm EmitBuilder instance. The code above translates to something along the lines of: asm.Run( asm.Delay(fun() -> asm.Combine( asm.Yield(OpCodes.Dup), asm.Delay(fun () -> ... asm.For((0, len), fun i -> ...
Clearly this isn't very nice to write, let alone nice to read. The downside of our monadic sugar may be that it isn't always obvious exactly what is going on where.
For your perusal I put this code on Github, find it here.
I was working on a custom expression compiler that had a simple interface such as:
Func<TContext, TOutput> Func(String expression);
defined in an appropriately generic class.
The problem, however, is that the resulting Func does not really show us anything of the original expression that "went in".
Now, Microsoft provides hooks for customizing debugger display, which is great.. but limited.
First I thought it might be possible to attach an attribute "at run time": as I am using LINQ expression trees to achieve the compilation, perhaps I could add a DebuggerDisplayAttribute to the result. It appears, however, that there is no API for this (it seems this *may* be possible when using ILGenerator, which makes it odd that it is not available here).
We can't attach an attribute to the Func delegate type, so perhaps a custom delegate type would work. In some ways this is a shame, as I don't like deviating from the Func type, but it seemed a small sacrifice for a much improved debugging experience. The benefit of a custom delegate type would have been that we could add a DebuggerTypeProxyAttribute to proxy to our debugger class. Unfortunately this attribute cannot be applied to delegate types.
If we must use DebuggerDisplayAttribute, the problem becomes: how do we access data about our expression given a delegate instance? The delegate class is (damn it) sealed, so we can't derive our own. The solution I came up with is to wrap the compiled func in a delegate that is instance bound to a nested (internal) instance, as follows:
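A hedged sketch of the shape of this (the compiler class name and the Compile helper are illustrative; the nested class sits inside the generic compiler class so TContext and TOutput are in scope):

public class ExpressionCompiler<TContext, TOutput>
{
    // Nested context that the returned delegate is bound to; its ToString
    // gives the debugger something meaningful to show.
    internal sealed class _ExpressionDebugContext
    {
        private readonly Func<TContext, TOutput> _compiled;
        public readonly String Expression;

        public _ExpressionDebugContext(String expression, Func<TContext, TOutput> compiled)
        {
            Expression = expression;
            _compiled = compiled;
        }

        public TOutput RunFunc(TContext context) { return _compiled(context); }

        public override String ToString() { return Expression; }
    }

    public Func<TContext, TOutput> Func(String expression)
    {
        Func<TContext, TOutput> compiled = Compile(expression); // the existing LINQ Expressions compilation
        // The returned delegate's Target is the debug context, so a debugger display
        // string such as "{Target.ToString()}" can surface the original expression.
        return new _ExpressionDebugContext(expression, compiled).RunFunc;
    }

    private Func<TContext, TOutput> Compile(String expression)
    {
        throw new NotImplementedException(); // placeholder for the real compiler
    }
}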
In other words we return a delegate that has its "this" bound to an instance of _ExpressionDebugContext. We can display properties of this using the debugger string "{Target.ToString()}", where Target refers to "this". The actual delegate returned simply passes through to the func that we compiled.
Downside: there is probably a small performance overhead due to the extra call. In my case, however, this was well worth it, as these kinds of performance concerns are not relevant here, but a good debug experience is! Furthermore let's just assume the JITer kicks ass and inlines all this stuff.
Finally, LambdaExpression does allow us to compile to a MethodBuilder. MethodBuilder inherits from MethodInfo, and Delegate.CreateDelegate should be able to build a delegate given a MethodInfo instance. This is a bit more work as creating a MethodBuilder takes some effort, but it may be another angle on this.
If you found easier/better ways to do this, I would be very interested to hear from you!
I once thought it a good idea to define System.Func delegate types in a .NET 2.0 library. Due to this unfortunate cock up, this library is now hard to use in a .NET 3.5 project due to qualified name clashes.
You would say: easy, just add an alias to System.Core or to the library. Adding one to an ordinary reference is indeed easy: open the properties for the reference in Visual Studio, add an alias name, and then use that alias by putting the following at the top of the C# file:
extern alias ActualSystemCore;
Now we can refer to our Func type using
ActualSystemCore::System.Func<...
Great, but can we do the same with System.Core? It turns out this is hard because Visual Studio implicitly always adds a reference to System.Core to your project, which means you can't set an alias for it.
I tried adding the reference explicitly, but Visual Studio says it is already added, even though you cannot see it explicitly in the references folder of the project explorer.
I then tried editing the .csproj MSBuild file directly, adding an explicit reference with the alias set as metadata, roughly:
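<!-- sketch: an explicit System.Core reference with the alias set as metadata -->
<ItemGroup>
  <Reference Include="System.Core">
    <Aliases>ActualSystemCore</Aliases>
  </Reference>
</ItemGroup>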
Further, by switching on the Visual Studio build option for diagnostic MSBuild output (found under Tools > Options > Projects and Solutions > Build and Run), we see that a property called AdditionalExplicitAssemblyReferences is being set to System.Core. I assume Visual Studio sets this property using a command line parameter for MSBuild along the lines of /p:AdditionalExplicitAssemblyReferences=System.Core or something similar.
Thus our own System.Core reference is removed and the implicit reference is re-added, with the unfortunate side-effect that the System.Core alias we added is lost. To fix this, we override the property in our csproj file and set it to empty (this does mean we need to add System.Core manually, as shown above, with our alias set):
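<!-- sketch: stop msbuild from injecting its own implicit System.Core reference -->
<PropertyGroup>
  <AdditionalExplicitAssemblyReferences></AdditionalExplicitAssemblyReferences>
</PropertyGroup>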
Our project with the alias for System.Core now builds, everyone is happy.
Last year a javascript competition gained a fair bit of publicity, enough so that even a non-front-end developer such as myself found out about it. Not having done any 3D graphics before, I thought it'd be interesting to try something new: 3D graphics using javascript and canvas, with no WebGL or other 3D library. The actual 3D rendering turned out to be quite easy indeed, the hard part mostly being squeezing the whole thing into 1024 bytes of javascript, with only a canvas element provided.
I decided to recreate the Amiga ball demo as here:
Although the old demo wasn't actual 3D (it used a rotating colour palette to achieve the effect), I wanted to build a genuinely 3D version: render an actual 3D model in Cartesian coordinates by projecting it onto the 2D canvas.
First things first: how to do the projection. This is actually much easier than I thought. Consider a camera looking down at your model in 3D space. The camera is looking along its line of sight, so we simply need a change of coordinates with the origin becoming our camera and the z-axis its line of sight. The projection onto the 2D plane, the x,y-plane, can then simply be read off as the x,y coordinates.
To perform this translation, we need to shift the camera to the origin and then rotate it. The camera's line of sight can be described by two angles: yaw and pitch (assuming the camera looks at the origin). My interpretation of these may be a bit liberal to say the least, but by yaw I mean the rotation of the camera in the x,y plane, as illustrated below. Pitch is the rotation of the camera in the x,z plane. It does not really matter how we define these angles, as we'd otherwise just get differences of π and possibly rotations in different directions. Thus, to move the camera to the z-axis we have to rotate it by -yaw in the x,y plane, and by pitch in the y,z plane. Any rotation is applied by simply multiplying through with the corresponding rotation matrix.
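In symbols (my notation, not from the original post): with camera position c and a model point p, the camera-space point and its screen projection are roughly

p' = R_pitch · R_z(-yaw) · (p - c),    (x_screen, y_screen) = (p'_x, p'_y)

where R_z(α) is the usual rotation matrix in the x,y plane,

R_z(α) = [ cos α  -sin α  0 ]
         [ sin α   cos α  0 ]
         [ 0       0      1 ]

and R_pitch is the analogous rotation for the pitch angle.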
Now that we can project onto 2D space, we need a model to render. To make things easy we create a simple model of polygons with an associated colour: the model is an array of polygons, where each polygon is an array of 4 coordinates (x, y, z) plus a colour.
In order to create the model, we need to create the Amiga sphere and the grid denoting the room in which it is bouncing. The sphere itself can easily be generated using polar coordinates, which are subsequently translated to Cartesian coordinates. In polar coordinates the sphere, being at the origin, can be described by two coordinates, φ and θ: φ moves along from pole to pole, and θ moves around the equator (counter-clockwise, as is the default mathematical direction). The top of the sphere is thus (-π/2, 0), naturally using radians. The sphere is generated by splitting an arc of π into equal parts, 16 in the demo. We still need to determine the colour of each polygon in the model. In integer parts p and t, a polygon is white iff (p + t) mod 2 is 0 (note: starting at the north pole, we add steps of π/16 to build the sphere). The first polygon, p = 0 and t = 0, is thus white.
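For reference (my notation, not from the original post, and the demo's angle conventions differ slightly), the standard conversion from these polar coordinates to Cartesian ones, with φ measured from the equator, is

x = r·cos(φ)·cos(θ),  y = r·cos(φ)·sin(θ),  z = r·sin(φ)

for a sphere of radius r centred at the origin.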
Next, we need to add the grid to the model. In order to reduce the code needed, we draw lines in the grid as "hooks": starting at a point, move down 1, then right 1, and back again. This is so that we can use the same polygon model as elsewhere; as this polygon essentially has the width of a line it will look like a line.
Then we need to actually render this. Canvas itself requires another coordinate transform, as it has the origin in the top left with the y-axis pointing down. Also, as the origin isn't centered, we need an origin translation to account for this. I got this wrong at first, causing a bit of a headache to find the problem.
Simply rendering the above, however, one encounters a common problem when doing 3D graphics: ensuring that whatever is in front hides whatever is behind it. There are several established solutions to this: one is calculating for each point whether it is hidden or not, another is simply sorting all "points" on distance from the camera. The latter, called the painter's algorithm or z-sort, is simple to add, and we pick the first point of each polygon to z-sort on (this is good enough for our case). Unfortunately this requires the use of a function: lots and lots of text and we only have 1024 bytes to play with! Finally, we should not forget to only draw those polygons that are actually in front of the camera, not behind it, i.e. for which we have z > 0 (to deal with rounding errors this became z > 0.1).
We now have a working 3D bouncing sphere. It'd be even nicer if we could add perspective for that splash of realism. It turns out perspective is also very simple to add: simply divide x and y coordinates by z. Doing this straight off, however, makes the perspective a little too skewed and unrealistic, so we divide z by 250 (trial and error) to reduce its effect and make it look real. Coincidentally this is also the distance we used between grid points, so that we could reuse the constant value (dw in the code).
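In formula form (my notation; the demo folds the constant into other factors):

x_screen = x / (z / 250),  y_screen = y / (z / 250)

so the further away a point is, the closer it is drawn to the centre.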
Parametrizing the origin of the sphere and the location of the camera allows us to change the position of both for each render loop.
To spruce the animation up a bit we let the sphere rotate; this is easy, as we can simply tweak θ during model generation to make the sphere spin around its axis. The ball bounces by updating the location of its origin, X, Y and Z, adding an increment each time (possibly negated when a wall is encountered).
Varying the yaw allows the camera to move around the scene, nicely demonstrating perspective and z-sort in action. As I am already calculating cos(yaw) and sin(yaw) for this camera rotation, I could reuse these for an extra rotation on the ball model for the tilt rotation (the rotation of the ball in the y,z plane).
Finally the ball rotation direction changes on every bounce with the wall.
Getting all this to fit was quite a challenge, though after squeezing enough bytes out I had some room left, and I had the idea to add some scrolling text. This can be achieved by viewing each square in a line of the grid as either 0 or 1; a 1 indicates it's "on", say purple. Dividing the text "JS1K" into 5 lines, I could now encode each line as a number, and we simply have to test for each square in the grid whether its bit is on or off. At first I thought of encoding each number as a unicode character; however, with the jscrusher optimizer simply using an array of integers worked better, and the unicode characters ended up being 4-byte ones anyway, plus the use of "charCodeAt" adds a lot, and I mean a lot, of bytes.
Squeezing the kilobyte
The hardest part was getting the whole thing into one kilobyte (in terms of bytes, not characters, as the competition so clearly stated). Obviously we use a minifier of some sort, if only to strip whitespace. Better ones do much more of course, such as eliminating dead code etc. Some experimentation showed that the Google Closure Minifier worked the best for me, better than the Microsoft Ajax minifier or some Yahoo equivalent.
A first version, which had no more than a static bouncing ball (no moving camera, no scrolling text) was about 2.5k, minified of course. Reducing it to 1.5k was easy but then getting the rest off proved hard.
Some of the tricks employed:
minimize the use of "function", inline functions wherever possible (the minifiers I tried couldn't get rid of functions)
define _ = Math
refactor the math to perform all matrix multiplications in one expression, rewrite negations to avoid minus signs, factor out constants etc.
use a function to declare variables, so rather than var foo = .., use function bar(foo) { ...
use expressions such as foo && bar rather than if(foo) { bar }
use .5 rather than 0.5, this was one thing that had to be manually edited in the Google Minifier result
use the comma operator to ensure an if statement needs no curly braces: if (Y <= radius || Y > 359)
V = -V, d = -d;
use inline assignment, ie foo(tmp = cos(theta))
use 3 digit RGB colors e.g. #fff
use a single double for loop to render both the sphere and the grid, most work went into doing this
leave out semicolons and (curly) braces wherever possible
use the same width and height for the canvas, context.width = context.height = w.
use context[fillStyle] instead of context.fillStyle, where fillStyle is a function argument set to "fillStyle" and the fillStyle variable will be minified to a single letter variant
use a post increment sphere[tmp++], which saved a single line and thus a semicolon (although we could leave that out anyway) and the curly braces for a for loop, which stayed reduced to a single statement
x >> 1 instead of floor(x / 2).
write (floor(i/2)+floor(j/2)) % 2 as (i|1) + (j|1) & 2
Having gotten all of this to fit, I learned of a little script by @aivopaas called JSCrusher which looks for patterns in a script, in order to reduce the size further. This uses "eval" which can be considered nasty, in fact subsequent JS1K competitions banned its use. I replaced this eval with a setInterval in order to get the animation going. One other nasty is the use of the "with" statement which saved me a few bytes, so I managed to use two of javascript's "bad parts" but I chose features over elegance of code here.
All of this to produce this:
(oh, click here to start things, it's a bit intensive).
P.S. These images were drawn using Google Docs, thought I give it a try, not sure it's that good.
The javascript to render this is (actual post was hand compressed further)
_=Math;X=Y=Z=150;V=3;W=a=r=0;d=0.15;U=2;counter=15; function R(u,s,m,n,g,v,y,k,l,o,c,e,b,i,f,w,p,j){function t(h,q){return[m*s(h=_.PI/2-h*y)*s(q=q*y+r)+X,l*(p=m*s(h)*u(q))+k*(j=m*u(h))+Y,l*j-k*p+Z]}function x(h){for(b=0;b<4;)h[b]=[n-((j=h[b][2]-n)+(w=300*l+180-h[b][0])*l*0.3-(p=h[b++][1]-n*k-n)*k*0.3)/(j=(w*l-p*k-j*0.3)/n),n+(w*k+p*l)/j,j];j>0.1&&o.push(h)}X+=X<m|X>436?(d=-d,U=-U):U;Y+=Y<m|Y>436?(d=-d,V=-V):V;Z+=W=Z<m?-W:W-1.5;l=s(a-=0.04);k=u(a);r-=d;o=[];counter=++counter%69;for(c=0;c<32;c++)for(e=0;e<32;e++){c<g&&x([t(c,e),t(c+1,e),t(c+1,e-1),t(c, e-1),(c|1)+(e|1)&2?"red":"#fff"]);x([[i=c*g,f=e*g,0],[i+g,f,0],[i+g,f+g,0],[i+!(b=1&[5495,5444,3444,5396,5492][counter-c-33]>>e-9)*g,f+b*g,0],b&&"#30C"]);x([[0,i,f+g],[0,i,f],[0,i+g,f],[0,i+(b=1&[5495,5444,3444,5396,5492][e-32+counter]>>c-9)*g,f+b*g],b&&"#30C"])}o.sort(function(h,q){return q[0][2]-h[0][2]});with(v.getContext("2d")){b=v.width=v.height=550;fillStyle="#999";fillRect(0,0,b,b);for(c in o){strokeStyle=fillStyle=(f=o[c])[4]||"#90C";beginPath();moveTo(f[0][1],f[0][0]);for(e in f)lineTo(f[e% 4][1],f[e%4][0]);stroke();fill()}}}