- biruda
  - output spool files must contain arch, platform and host, now they collide!
    - later: have a job number per worker output file
  - start: worker names clash, so start must say which one to start. This depends on the architecture and the platform, so this is part of start. There can be multiple workers on the same platform/architecture on different coordinator hosts, this is ok: this is load-balancing
- http_lib:
  - POST
    - handle the answer body too
  - replace json-c with an embedded and thread-safe libcjson as in Tegano, especially since the Windows port of json-c is shaky
- ideas for KISS deployment (which we would do in addition to normal packaging):
  - build static binaries for certain platforms, e.g. a biruda-static package
- grub2 grub-reboot
  - easy way to have multi-boot Linux versions (if we don't want to chroot or virtualize)
  - how do we configure this?
  - how can we access the configuration zone of grub from other operating systems? Most likely we have to build something on our own. Last resort is that a biruda on, for instance, FreeBSD just reboots back to Linux (one-shot grub2 menu entry selection and boot, then falling back)
- cli:
  - have nicer error messages than "ERROR: HTTP error -3"

more philosophical and in flux:
- have a client-side embeddable web server for the biruda web server, so it can auto-deliver the client side. This means zero-install, which is good. Examples:
  => https://github.com/davidsblog/rCPU
  => http://smoothiecharts.org/ is really nice though, uses HTML 5 canvas
- sort out the addressing mess
  - what is unique?
  - worker architecture and platform:
    a) can be the same if we isolate a build in a chroot
    b) architecture the same, platform different.
       For instance using the same kernel and running a different userland (CentOS 6 on Arch)
    c) different architecture, for instance 32-bit Arch on 64-bit Arch
    d) different arch and platform: all other cases, e.g. a 32-bit chroot with CentOS 6 on 64-bit Arch Linux
    The coordinator technology is orthogonal to this: whether we use a thread without chroot, a thread with chroot, docker, vmware, qemu, xen or whatever.
  - why don't we have architecture and platform as part of the worker data? This seems plainly wrong. How can we know what platform or architecture the worker is working on without declaring it or executing a 'biruda --guess-env'? But this is the idea: have a configuration option for architecture and platform, defaulting to that of the coordinator (how, as this one has no idea about the workers belonging to it?). Then we can have commands which execute 'biruda --guess-env' to handle the generic case. This command would always have to be executed, but jobs come in one? And if we execute biruda, how? We need to install it into the worker environment:
    - inject a static binary, or
    - have special packages ready which we will install into the worker environment (which leads to the nice hen-and-egg problem again)
  - coordinator ARCH and HOST variables can only be expanded before the worker is being executed! We can differentiate workers based on their host, but that's not really interesting when running the worker.
  - simple chroot mode runs biruda wrapped around the worker. In all other cases the worker is an executed binary. This means the binary itself and its configuration have to be available.
    - the configuration can be copied there, or it gets sent to the worker.
  - we should really check out what 'etcd' is doing. Oh: Go, and the first thing I see is a system('curl'). Hmm, I think not. :-)
  - docker has similar approaches: inject itself first and then execv the real binary. This is an option.
  - chroot can't be part of the command, it's a property we have to know about when building the image (ok, questionable whether this should be part of biruda, but it is now).