
The eHarning Blog of harningt


Ref: ?cryptoface

In the process of designing a cryptography wrapper, I've found a tricky split in how libraries can handle processing data streams. From my findings, there are at least three different ways in which data can be managed:

  • Chunk processing - int f(in, inlen, out, &outlen)
  • Callbacks - int f(in, inlen, cb)
  • Stream abstractions - int f(stream_in, stream_out)

In designing a wrapper over implementations written in any of these styles, it's a tricky balancing act to figure out how best to work: one must weigh performance against the importance of a clean but powerful API.

Chunk Processing

An example of basic chunk processing is the interface provided by mhash: you feed it chunks of data until the end, at which point you get your result out. This is very flexible for the caller; however, it is quite a bit harder to implement when there are multiple transformations and/or when the number of input chunks does not necessarily match the number of output chunks.
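
To make the shape concrete, here's a minimal sketch of such an interface in C. The xform_* names are hypothetical and are only meant to mirror the int f(in, inlen, out, &outlen) pattern from the list above, not mhash's actual API.

    /* Hypothetical chunk-processing interface, mirroring the
     * int f(in, inlen, out, &outlen) style; names are illustrative only. */
    #include <stddef.h>

    typedef struct xform_ctx xform_ctx;   /* opaque transform state */

    /* Create a fresh transform state; returns 0 on success. */
    int xform_init(xform_ctx **ctx);

    /* Feed one input chunk. On entry *outlen holds the capacity of 'out';
     * on return it holds how much output was actually produced, which may
     * be zero if the transform is still buffering. Returns 0 on success. */
    int xform_update(xform_ctx *ctx,
                     const unsigned char *in, size_t inlen,
                     unsigned char *out, size_t *outlen);

    /* Flush whatever is still pending (final cipher block, digest, ...). */
    int xform_final(xform_ctx *ctx, unsigned char *out, size_t *outlen);

    void xform_free(xform_ctx *ctx);

The key property is that the caller drives the loop and owns every buffer, which is what makes this style easy to consume and comparatively painful to implement for multi-stage transforms.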

Callbacks

An example of callback handling is MS CAPI's CryptMsgOpenToDecode. The general workflow is that you set up a state machine and feed it data; whenever it has output ready, it calls your callback function with that data. You are then responsible for copying the data out and putting it wherever it needs to be. This is a powerful option; however, it makes rewrapping as chunk processing a challenge.

Callbacks can readily wrap chunk processing at the cost of an extra location to store the output buffer (although this is most likely how native callback-based systems work internally anyway).
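
As a rough illustration, here's how a callback front end might be layered over the hypothetical chunk interface sketched above; the output_cb and xform_push names are made up for this example.

    #include <stddef.h>

    /* From the hypothetical chunk-processing sketch above. */
    typedef struct xform_ctx xform_ctx;
    int xform_update(xform_ctx *ctx,
                     const unsigned char *in, size_t inlen,
                     unsigned char *out, size_t *outlen);

    /* Caller-supplied sink for output as it becomes available. */
    typedef int (*output_cb)(const unsigned char *data, size_t len, void *arg);

    /* Push one chunk through the transform and hand any output to 'cb'.
     * The stack buffer is the "extra location to store the output";
     * a real wrapper would loop or size it relative to inlen. */
    int xform_push(xform_ctx *ctx,
                   const unsigned char *in, size_t inlen,
                   output_cb cb, void *cb_arg)
    {
        unsigned char buf[4096];
        size_t outlen = sizeof(buf);
        int rc = xform_update(ctx, in, inlen, buf, &outlen);
        if (rc != 0)
            return rc;
        if (outlen > 0)
            return cb(buf, outlen, cb_arg);
        return 0;
    }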

Stream Filter Abstractions

Examples of stream abstractions in use are OpenSSL's PKCS7 signing and Crypto++'s hash filter. In reality, these are just complex wrappers around a callback system... however, they provide a clean model and unify the handling of both input and output.

Stream filters can readily wrap chunk processing in the same way that callbacks can... just take the input data from the chunk and pipe it forward.
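
A minimal sketch of that idea, again using the hypothetical xform_* interface from the chunk-processing sketch: the filter pulls from an input stream, runs each chunk through the transform, and pushes the result to an output stream. The byte_stream struct is invented for this example.

    #include <stddef.h>

    /* From the hypothetical chunk-processing sketch above. */
    typedef struct xform_ctx xform_ctx;
    int xform_update(xform_ctx *ctx, const unsigned char *in, size_t inlen,
                     unsigned char *out, size_t *outlen);
    int xform_final(xform_ctx *ctx, unsigned char *out, size_t *outlen);

    /* Minimal stream abstraction: read() returns 0 at end of input. */
    typedef struct {
        size_t (*read)(void *impl, unsigned char *buf, size_t len);
        size_t (*write)(void *impl, const unsigned char *buf, size_t len);
        void *impl;
    } byte_stream;

    /* Drive the whole pipe: pull chunks in, transform, push chunks out. */
    int xform_run(xform_ctx *ctx, byte_stream *in, byte_stream *out)
    {
        unsigned char ibuf[4096], obuf[4096];
        size_t n, outlen;

        while ((n = in->read(in->impl, ibuf, sizeof(ibuf))) > 0) {
            outlen = sizeof(obuf);
            if (xform_update(ctx, ibuf, n, obuf, &outlen) != 0)
                return -1;
            if (outlen > 0)
                out->write(out->impl, obuf, outlen);
        }
        outlen = sizeof(obuf);
        if (xform_final(ctx, obuf, &outlen) != 0)
            return -1;
        if (outlen > 0)
            out->write(out->impl, obuf, outlen);
        return 0;
    }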

Resolution

Since streams can usually be treated just like callbacks, the design can effectively consider them equivalent for the purpose of wrapping. To deal with the different I/O mechanisms and the differing libraries as efficiently as possible, providing both APIs may be the best option. The wrapper for each library can take advantage of that library's native interface, while the other API is layered on top via a smart common codebase that manages the differences.

This "smart common codebase" would be code managing buffers/streaming/etc in order to deal with the situation where too much data is available for the chunk processing, but the callback/filter has already used up the data in providing the additional data.

If there are any other paradigms for filtering/processing data, please let me know, either by commenting or emailing me. I'll post an update when possible with more information, as I'm certain there will be many people interested in one lib to crypt them all and make dealing with cryptography less complicated... since the library will take care of the nasty details and distill them into a single unified interface.

Posted Thu Oct 8 02:11:12 2009 +0000 Tags: ?projects ?tech

Due to problems with wiki-spam, I have removed the 'opendiscussion' plugin from this Ikiwiki installation.

Until there is a reasonable captcha setup for Ikiwiki, it is required that you obtain an OpenID in order to post.

Sorry for the inconvenience... but it's much better than having an ugly history of 1000 bogus posts.

Posted Sat Aug 30 02:07:46 2008 +0000 Tags: ?web

Now that I've found a little more free time for personal projects, I'm going to try to blog about project progress and discussions... well... monologues until others put their valuable two cents in.

In the past I've worked on putting together a network protocol for a project in Java. One of the issues I ran into was the apparent lack of a library like libevent or libev to make lightweight handling of multiple clients possible.

Apache Mina takes a different approach to network programming such that you effectively construct data handlers and let the framework handle data flow.

I thought that Lua and C could take advantage of such an approach, bringing with it concepts such as separation of concerns and basic abstraction, so I began architecting Lumina to fill the void.

Some design choices/requirements have made this a little more challenging, but they exist for important reasons.

Must compile as C

This requirement limits some code reuse, particularly in that many basic data structures, such as dynamically growing arrays and maps, have to be written from scratch, and object orientation must be hand-made. GLib offers some of these abstractions for C, but it seems an overly large dependency for this framework.
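
For a sense of the sort of boilerplate involved, here's a minimal sketch of a hand-rolled growable array in plain C; this is illustrative only, not Lumina's actual code.

    /* Hypothetical growable array of pointers, the kind of basic
     * structure a C-only framework ends up writing itself. */
    #include <stdlib.h>

    typedef struct {
        void **items;
        size_t len, cap;
    } ptr_array;

    /* Append an item, doubling the backing storage as needed. */
    static int ptr_array_push(ptr_array *a, void *item)
    {
        if (a->len == a->cap) {
            size_t cap = a->cap ? a->cap * 2 : 8;
            void **tmp = realloc(a->items, cap * sizeof(*tmp));
            if (!tmp)
                return -1;
            a->items = tmp;
            a->cap = cap;
        }
        a->items[a->len++] = item;
        return 0;
    }

    static void ptr_array_free(ptr_array *a)
    {
        free(a->items);
        a->items = NULL;
        a->len = a->cap = 0;
    }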

So far this does not preclude the potential use of some other language that can preprocess into C, so long as it creates intelligible output and does not require writing extensive glue.

Must provide cross-platform interface

Basically this means that dependencies need to be well thought out and, where necessary, replaceable behind the scenes. The next requirement, allowing multiple potential implementations, makes this easier.

Must have flexible implementation options

This effectively means that the dependencies for the core should be at an absolute minimum. If the core interfaces are used, swapping out the backend (libevent/libev/IOCP) shouldn't affect the library user's experience.
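
One common way to get that in plain C is a struct of function pointers that the core programs against, with each backend supplying its own constructor. A rough sketch follows; the names are hypothetical, not Lumina's actual API, and the int fd parameter is a simplification (a real cross-platform core would abstract the handle type for IOCP).

    /* Hypothetical backend interface: the core talks only to this
     * struct, so libevent, libev, or IOCP can sit behind it without
     * the library user noticing. */
    typedef struct event_backend {
        int  (*init)(struct event_backend *self);
        int  (*watch_read)(struct event_backend *self, int fd,
                           void (*on_readable)(int fd, void *arg), void *arg);
        int  (*run)(struct event_backend *self);     /* drive the event loop */
        void (*shutdown)(struct event_backend *self);
        void *impl;                                  /* backend-private state */
    } event_backend;

    /* Each backend ships a constructor; the core never includes
     * libevent/libev/IOCP headers directly. */
    event_backend *lumina_backend_libevent_create(void);
    event_backend *lumina_backend_libev_create(void);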

Must provide a Lua-accessible module

This decision will require some minor discipline in interface design; however, it will make the framework that much more usable.

Any suggestions as to resolving any of the issues pointed out would be greatly appreciated.

Posted Mon Jul 21 04:57:53 2008 +0000 Tags: ?projects ?tech

We finally decided to get a Wii with the introduction of the Wii Fit.

I have to say that even though it's technically inferior to the more expensive and graphics-intensive Xbox 360 and PlayStation 3... it more than makes up for it in unique gameplay. What other game system integrates Bluetooth, accelerometers, pressure sensors, hardware-accelerated crypto, and WiFi into a unified experience that also brings legacy games to the table?

The $40 Wiimote packs quite a bit into the little white shell: an infrared camera, a three-axis accelerometer, Bluetooth connectivity, and an extensible IO port to connect w/ a "Nunchuck" or "Classic Controller".

Wii Fit introduces what is called the "Balance Board" as a fourth controller, using a series of pressure sensors to gauge balance and weight. It's more proof that an extensible Bluetooth-enabled console can bring entirely novel input devices to the table without kludges.

Look for more posts related to Wii games and other neato things.

Oh yeah, forgot to mention the lil' story of how we got the high-demand/low-supply system two weeks ago. We went to Meijer to check if they potentially had any Wii's available... and they didn't really have any. However, someone had JUST returned one and a worker was bringing it back. Lucky us that someone didn't want their Wii... for no apparent reason, since it wasn't even opened.

Posted Sun Jun 15 01:41:41 2008 +0000 Tags: ?tech ?wii

Shortly after getting our Wii, Jenn managed to find a Wii Fit at Walmart in another interesting story. She went to Walmart and asked if they had any Wii Fits in and wasn't laughed at... instead the lady stated that they had some in the back that they didn't intend on putting out on shelves. Sounds like some unhappy workers are slackin off :-P

Jenn ended up getting two, just in case someone we knew wanted one, since we knew how insane the demand is.

Onto the fun... Wii Fit is a game paired with the Nintendo Balance Board in a way that throws even more gameplay conventions out the window. It challenges players to be conscious of weight and balance in a way that encourages (and sometimes outright forces) activity.

The caveat to the activity it encourages/forces is the fact that people can "cheat" the system... but then it's just a wasted novelty. One can have a little extra fun in the "Island Run" by jerking the Wiimote up and down extra fast near the beginning to get your Mii to follow dogs rather than another Mii, one of which makes you jump off a cliff in your run.

All in all, I have to say this was a sound investment. We play almost every night and track our weight (not an entirely accurate measurement, but it shows trends). It has plenty of activities that attempt to enhance strength and balance as well as aerobics and yoga. I'd probably never do something like Yoga or step aerobics in a normal setting, but with the Wii Fit, I've found them to be somewhat fun and physically challenging.

If anyone wants to buy the extra Wii Fit off of me, just let me know via email or discussion... I'll be sure to note in the discussion once it's sold...

Posted Sun Jun 15 01:41:41 2008 +0000 Tags: ?tech ?wii

I've finally set up a blog using Ikiwiki as the backend. This pulls together the capabilities of Ikiwiki's blog plugin to aggregate pages based on some sort of specification (namely, non-discussion pages under the blog/ directory are considered blog posts).

Basically this means that I now have a safely backed up place to post all my blogs/etc and not have to worry about some blog server dying and all my ideas, comments, and other stuff going kaput. How it's backed up is a topic for future discussion, namely how git is used as a backend for storage/etc...

I plan on using this blog to post about my life, technical involvements/discoveries/topics..., and stuff in general.

Posted Tue May 13 22:18:54 2008 +0000 Tags: ?ikiwiki ?life ?tech

While checking out the new stuff in Ikiwiki, I noticed a mention of "NearlyFreeSpeech.NET" web hosting support added in the recent version... The name, and the fact that there was something "special" enough about it to warrant a mention of Ikiwiki support, drew me to check out what it was all about.

The most interesting fact about this web hosting service is that you pay exactly for what you get... No ugly large monthly/yearly fees. It also has some impressive features available, such as Lua, PHP, Python, Ruby, Perl.... + ssh access.

The /only/ downside that I can see is the fact that its hosting does not allow for persistent processes (ex: FastCGI, mod_perl, ...) so certain things will be slower and may flat-out not work. On the other hand, you get what you pay for.

Their basic pricing model is this:

  • $1.00 per GB per month of bandwidth (cheaper if you use more)
  • $0.01 per MB-month of disk space

Pretty cheap! Especially since many sites charge ~ $30/month and give you some 30GB/month of transfer (maybe less, maybe more)... making you pay for bandwidth you likely won't use... and even if you did manage to use it all up, you could get extra charges or have your site taken down until you pay up extra.... not to mention that NearlyFreeSpeech.NET reduces the price-per-GB once you go above 1GB of usage.

This is probably the 'perfect' web hosting for those just getting started with websites, due to the low entry barrier. It's even good for those who have heavy-traffic sites, thanks to the bulk scaling.

I plan on moving this web site there as soon as possible since it'll be a reliable web location, though I'll still have the local site available at the current location (at home)... especially since I probably won't fully move git up there, as I'm not sure how well they support it... I'll have to check that out and report on it in another post.

Posted Tue May 13 22:18:54 2008 +0000 Tags: ?tech ?web

About a week ago I got a MacBook Pro for work on various projects that required extremely deep integration in OSX. It's a fairly recent Core2Duo one with Leopard loaded on it.

I have to say that for the longest time, probably since I first got acquainted with Windows and Mac at school, I've never been comfortable with the behavior of Macs. This, I think, stems from the one-button mouse and it being "different"... Sadly, such an odd annoyance blew itself up into unreasonable discomfort w/ Macs... even when hooking up a two-button mouse added the 'real' right-click behavior I expected... and (later on) Linux itself was always different... even from itself...

In the time between elementary school and now, I hadn't had much exposure to Macs except for testing. I got acquainted with Linux, Solaris, and some other flavors of Unix in my computer science curriculum at Michigan State University, but virtually nothing touched Mac... As many may know, I got hooked on Linux early in my college years and checked out many of the interesting ways you can manage applications/windows/etc.

Now... onto the MacBook Pro experience.... After taking some time to figure out how things worked in OSX at the user level, it started to make sense to me. Just like getting used to the differences between the Windows and Linux UIs, the OSX UI had its own peculiarities. Once I understood or worked around the differences, working with OSX became significantly more likeable... even preferable many times.

I'd have to say that OSX combines many of the things found in Windows and Linux (though it's a lot more likely that Windows takes Apple ideas.. but that's another story) and puts them into an extremely stable/friendly environment. It has some of the UI and system unity that Windows strives for... and also a fully integrated *nixy environment that makes installing OSS software extremely easy. More truthfully, OSX shrinkwraps a custom conglomerate of Open Source Software (ex: Darwin) and oodles of gloss (Aqua + the built-in software suite) into a user-friendly/fault-tolerant system.

In short here's my view of the different basic systems:

  • Windows - it's the common machine that must be used in certain situations
  • Linux - it's a GREAT high-performance testing ground for neat technologies, server software, and oddly enough games (often running Windows ones better than 'real' Windows can)
  • OSX - the stiff-but-friendly system that keeps software fallout extremely contained
Posted Tue May 13 22:18:54 2008 +0000 Tags: ?mac ?tech

I hadn't ever posted any information about this site's policies and capabilities... so I doubt anybody knew where they could post and whatnot....

Here's a general list of what's setup on this wiki/blog:

  • Anybody, logged in or not, can edit the discussions page associated with all pages
    • Any spam on the discussions page can get your IP banned
    • Significant spamming may result in more obstacles put in the way (ex: require OpenID)
  • Anybody logged in with an OpenID can edit nearly any page... (example locked page: blog posts)
  • You can mirror/get a full history of the wiki by pulling git://git.eharning.us/wiki
  • The site's contents are under the Creative Commons Attribution-NonCommercial license (in case anybody wants to copy my texts for profit... just contact me and I am quite likely to work something out) with the general exception of patches where required and code/software
  • User accounts are exclusively via OpenID for simplicity; feel free to check out http://openid.trustbearer.com/ for a secure OpenID provider w/ hardware tokens and support for some national ID cards.
Posted Tue May 13 22:18:54 2008 +0000 Tags: ?this site