Parrot 0.1.0 has left the building and is on its way around the world, thanks to the tireless efforts of the p6i folks. Snag it from http://www.cpan.org/authors/id/L/LT/LTOETSCH/parrot-0.1.0.tar.gz.
Lots of new stuff in there. Grab it, play with it, have at it.
Oh, and we can play Tetris now too, if you've got SDL installed.
If you've been hanging out on perl6-internals you probably know, but if you haven't and actually care, here's a quick rundown of where Parrot's object system stands right now. Note that this is not the final state, just the state we're in and will be for the 0.1.0 release.
At the moment you can:

* Create classes
* Subclass existing classes
* Add attributes(1) to classes
* Instantiate objects of a class
* Call methods on objects, with a full method search up the inheritance tree
* Redispatch from within a method, either continuing the method search(2) or restarting it from a parent class(3)
Namespaces are also a bit dodgy, though not too bad. (Just don't use multi-level namespaces right now.) There's no method cache yet, so things'll be a bit slow for now. And IMCC has no syntax to support objects, so you've got to manage it all by hand at the moment, though it's really not too bad. Details of the system are in PDD 15 if you're so inclined, though not everything that's documented in there currently works. (And there are, I'm sure, things that aren't in there that need to be.)
With this release I think we've vaulted Parrot firmly into the mid-'80s. (Alas not the early '70s, since OO stuff seemed to go firmly backwards for a few decades from the high-water mark that Smalltalk set, but we're getting there.) More of the missing stuff, especially AUTOLOAD and operator overloading/tying, should be in 0.1.1, but we'll see where that goes.
(1) Or, as .NET calls 'em, properties. (I think. Maybe not, opinions vary.) Object slot variables: class-private "every object of this class has this thing in it" things.
(2) Which continues the method search as if the method that was actually invoked didn't exist.
(3) Which continues the method search as if the class containing the invoked method were the object's base class.
Or at least mostly done.
Objects, that is. Everything I planned to have them do for the next release is done, so...
Time to stress-test for the 0.1.0 rollout. Enjoy Parrot's crunchy object-y goodness!
And yeah, I know--constructors, destructors, objects that masquerade as other PMC types, AUTOLOAD (or other fallback method providing mechanisms), cross-type inheritance, and multimethod dispatch would all be nice. Hey, you take what you can get, right? :)
It's now open for business and the Call For Papers is out.
Head over to the website for more info. And put in a proposal for a paper. Really, it's fun.
why the lucky stiff has written up a (poignant) guide to Ruby. Definitely a trip but, then, so's why. The Lambda folks are less impressed, but that's only because they're not any fun.
As is whatever the equivalent module for Apache 2.0 is. (mod_deflate, I believe.)
I finally broke down and snagged an RSS content aggregator thingie (which, in addition to confirming my feelings that polling for RSS feeds sucks more than Cygnus X-1, now makes me want RSS feeds on places that don't have them. EurekAlert and The New Scientist spring to mind) since I've got enough places I go infrequently that I was starting to lose track of which ones I'd been to lately and which I hadn't. 'Tis keen, though I'd like to be able to twiddle more stuff than the tool allows. No joy there, but as it's freeware I can't rightly complain.
Anyway, the tool (NetNewsWire Lite) has a keen little statistics window you can pull up to take a look at bandwidth stats, 304 counts, and whatnot. Included in this little gizmo is a count of how many times you got gzipped data back from the site in question.
Now, I'd forgotten that I'd enabled mod_gzip on the server ages back. I did it for response-time reasons--I've only got a 128K upstream link, a friend's got a pretty image-heavy and markup-heavy website hanging off this box, and the full-text feeds for this blog (which some annoying folks are fetching in full every time. Sheesh, people, welcome to the 21st century! Either HEAD the feed or introduce yourself to the If-Modified-Since header!) get big. Smushing the data, even for a few people, helps response time for everyone if you've got CPU time to burn, and I generally do. I didn't realize how much of a difference it makes, though. According to the stats, it cuts the feed size down by about 75%.
What surprised me is the number of sites that don't have compression enabled.
Now, I can understand not doing it for a number of reasons. It does put an extra load on your server, so if you're CPU-bound it may not be a win. Still... you'd think more folks would do it. Setup's dead-simple, so far as I can tell there aren't any side-effects for clients that can't handle it, and it's a one-shot, set-and-forget deal. It definitely makes a difference in the amount of data transferred, both out of the server and into the client. Less load for you and snappier response for your readers, 'specially if they're on slow links.
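For Apache 2.0 with mod_deflate, the whole thing can be a couple of lines in httpd.conf. A minimal sketch, assuming mod_deflate was built and the module path matches your install:

    LoadModule deflate_module modules/mod_deflate.so

    # Compress the texty types; images are already compressed, so leave them be
    AddOutputFilterByType DEFLATE text/html text/plain text/xml text/css

mod_gzip on Apache 1.3 takes a few more directives, but it's the same general idea.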
If you've not done it, and your provider lets you, try it. You may well like it, and if you pay by the megabyte transferred your bank balance definitely will. It won't shrink the images, but every bit helps. XML, XHTML, CSS, and HTML all smush down quite nicely, FWIW.
(And a late addition--if you're writing an RSS syndication tool and it doesn't accept compressed streams... consider making it so it does. There's plenty of code around to uncompress the compressed data)
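Here's roughly what that looks like in Perl with LWP, conditional GET and all. This is a sketch--the feed URL is made up, and a real tool would stash the Last-Modified value somewhere between polls:

    use strict;
    use warnings;
    use LWP::UserAgent;
    use HTTP::Request;
    use Compress::Zlib ();

    my $url  = 'http://example.com/index.rdf';   # hypothetical feed
    my $last = undef;    # Last-Modified from the previous poll, if any

    my $ua  = LWP::UserAgent->new;
    my $req = HTTP::Request->new(GET => $url);
    $req->header('Accept-Encoding'   => 'gzip');
    $req->header('If-Modified-Since' => $last) if defined $last;

    my $res = $ua->request($req);
    if ($res->code == 304) {
        # Nothing's changed, and it only cost a header's worth of bandwidth
    }
    elsif ($res->is_success) {
        my $body = $res->content;
        $body = Compress::Zlib::memGunzip($body)
            if ($res->header('Content-Encoding') || '') =~ /\bgzip\b/;
        $last = $res->header('Last-Modified');
        # ... hand $body off to your RSS parser of choice
    }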
I really need to get things together and finish the time-limited black hole route system I keep thinking about. Digging through the logs recently, I've been finding patterns to be teased out--systems that constantly hammer me with viruses or bang on the webserver trying to post comments to non-functional CGI programs. (Yeah, I left mt-comments.cgi around and just marked it non-executable.) While it's not a lot of traffic, it's annoying traffic, and in the case of the virus bombs it's repeated over and over.
I could just install a blackhole route for these things, but that's got two issues. First, it goes away when my system reboots and, while that's not all that common, it does happen. Second, I'm not really comfortable with automatically generated black hole routes being effectively permanent, lasting either forever if they go in the config files or until reboot otherwise. For snort-generated routes that's mostly OK, but past that, well... it seems a bit much.
What I want is a database where I can throw an IP address or block in with an expiration date, and have the block last until it expires, across reboots and resets, presumably with a little daemon that spins its wheels, updating the list every 10 seconds or so. (Maybe every minute, dunno) While that shouldn't be that tough, I've just not gotten around to it, and I really ought to.
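A sketch of what I have in mind, in Perl, with a hypothetical SQLite table of (ip, expires, installed) rows and FreeBSD-flavored route commands--none of the names here are real, and the route invocations would need adjusting per-OS:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use DBI;

    # Hypothetical schema:
    #   CREATE TABLE blocks (ip TEXT PRIMARY KEY, expires INTEGER, installed INTEGER DEFAULT 0)
    my $dbh = DBI->connect('dbi:SQLite:dbname=blackhole.db', '', '', { RaiseError => 1 });

    # Routes don't survive a reboot, so reinstall everything at startup
    $dbh->do('UPDATE blocks SET installed = 0');

    while (1) {
        my $now = time;

        # Install routes for blocks that aren't in place yet
        my $new = $dbh->selectcol_arrayref(
            'SELECT ip FROM blocks WHERE installed = 0 AND expires > ?', undef, $now);
        for my $ip (@$new) {
            system('route', 'add', '-host', $ip, '127.0.0.1', '-blackhole');
            $dbh->do('UPDATE blocks SET installed = 1 WHERE ip = ?', undef, $ip);
        }

        # Pull the routes whose time has come
        my $dead = $dbh->selectcol_arrayref(
            'SELECT ip FROM blocks WHERE installed = 1 AND expires <= ?', undef, $now);
        for my $ip (@$dead) {
            system('route', 'delete', '-host', $ip);
            $dbh->do('DELETE FROM blocks WHERE ip = ?', undef, $ip);
        }

        sleep 10;
    }

Adding a block is then just an INSERT with an expiration timestamp, and the daemon takes care of the rest.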
Besides, then I could in good conscience write an RFC with the text "The server MAY install a null route for clients which violate this restriction. Null routes MUST be temporary, with the route lasting no more than one minute for the first violation. A warning period equal to the duration of the lifetime of the null route MAY be imposed after the routing is restored, and the null route lifetime MAY double if another violation occurs within this warning period."
I think I could enjoy that one. Pity I desperately lack the time I need to finish even the design of the RSS polling replacement system it'd be a part of.
Elphaba: So you lied to them.
Professor Marvel: Elphaba, where I come from we believe all sorts of things that aren't true. We call it history.
STR, I suppose. Or not.
Leading the charge to alter the state constitution to restrict marriage in Massachusetts to one man and one woman is none other than the Governor, Mitt Romney. Mitt's a Mormon (one of the many splinter sects of Christianity). You remember the Mormons, right? They're the group that triggered the passage of the original federal (and many state) marriage laws in the first place because of that whole polygamy thing...
This could just be ~150 years of payback, I suppose. (I'm not sure that getting the support of gay groups in the 1800s would've done much for their cause)
So I'm sitting here working on a "write a compiler for parrot" article for O'Reilly, partly because they asked and partly to nail down all the bits of compiler info that I've gathered as I've been working on the compiler for the office.
Y'know what?
Compilers are easy.
No, really, they are. Not, mind, that I'm saying optimizers are easy, as they're not, sorta. (Well, OK, they're actually not all that tough to implement, once someone else has done the hard work of creating the abstract optimizations. That, like any other brain-warping creative act, is damned difficult, though the simplicity of the result often makes it seem less tough than it is. E = mc² seems pretty simple too, though you'd be hard-pressed to find more than a small handful of people who could come up with it if someone hadn't already. But I digress.) But compilers? Nah, no biggie.
Sure, there's some art to it. Creating a grammar that produces a nicely usable result makes things much easier, and that's not too tough to do once you've done one or two and gotten a feel for it. I heartily recommend implementing your own simple compiler before tackling a more complex one. You'll spend less time and effort writing a simple compiler and then a complex one than you would writing just the complex compiler. (I do not kid.) Still, if I managed to figure it out (and I hate parsing), you can too. Useful hint--grammar rules should either produce a simple value, delegate to one of several simpler rules, or be funky. It's easier to handle:
parenexpr: '(' expr ')'
simpleval: parenexpr | stringconstant | numericconstant
than it is to handle
simpleval: '(' expr ')' | stringconstant | numericconstant
since in the former case the simpleval rule can unconditionally delegate to one of several simple rules, each of which has just one form, while in the latter case it needs to figure out which form it's got and which piece to delegate to.
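In Parse::RecDescent terms, the delegation looks something like this. A sketch only--the constant rules are simplified stand-ins, and a real grammar would have more expr forms:

    use strict;
    use warnings;
    use Parse::RecDescent;

    my $grammar = q{
        parenexpr       : '(' expr ')'    { $item[2] }
        stringconstant  : /'[^']*'/       { [ str => $item[1] ] }
        numericconstant : /\d+/           { [ num => $item[1] ] }

        # Each alternative is a single rule, so simpleval just passes
        # the winner's result straight up, no sorting out which form it got
        simpleval       : parenexpr | stringconstant | numericconstant

        expr            : simpleval
    };

    my $parser = Parse::RecDescent->new($grammar) or die "Bad grammar!\n";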
And once you've gotten that nice grammar to build you a tree of nodes? Then the compiler just emits a boilerplate header, delegates the processing of each statement node to its handler (which probably delegates in turn to its children's handlers), and then emits a boilerplate footer.
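Sketched in Perl, with a made-up node layout of [ type, children... ] and placeholder "push"/"add" output standing in for real target code, that boils down to a dispatch table and about a dozen lines:

    #!/usr/bin/perl
    use strict;
    use warnings;

    my %emit;
    sub emit_node {
        my $n = shift;
        my $handler = $emit{ $n->[0] } or die "No handler for '$n->[0]' nodes\n";
        $handler->($n);
    }
    %emit = (
        num => sub { print "    push $_[0][1]\n" },
        add => sub {
            emit_node($_) for @{ $_[0] }[1, 2];   # operands first...
            print "    add\n";                    # ...then the operator
        },
    );

    my @statements = ( [ add => [ num => 1 ], [ num => 2 ] ] );

    print "# boilerplate header\n";
    emit_node($_) for @statements;    # one handler call per statement node
    print "# boilerplate footer\n";

Real trees get bushier, but the shape of the thing stays the same.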
I keep waiting for the other shoe to drop. This stuff really can't be that easy, or everyone'd be doing it all the time. I have this nasty feeling that I'm underestimating the amount of work that goes into writing a parser generator like Parse::RecDescent or yacc here. (I do have some idea how much trouble it is to write an engine to target, but honestly Parrot (and .NET, and the JVM) aren't that difficult to do. Sure there's a lot of work there, but not much complexity)
Ah, well, I need to write faster, both so I can finish the article and nail down the grotty bits of grammar and compiler creation before I forget.
At this point the wave of mydoom/novarg/sco virus mail's sorta abated, and everyone who has a clue has their mail going through virus/worm/trojan filters before it gets to them, so our mailboxes are mostly blissfully virus free. As that leaves more MTA CPU time available for slinging Nigerian Scam mail around I'm not sure whether it's an overall improvement in the state of the 'net, but you take whatever you can get, I suppose.
Unfortunately, while I'm not getting the viruses, and the spam generally gets caught and filtered, there are still all those damn "the mail you didn't send has a virus in it!" messages flying around. Gee, thanks, it's so nice to know that the message I didn't send was infected. (Note to mail admins--if you haven't checked whether you're sending infection notices, go check, dammit! And turn them off if you are.)
Thanks to the folks at attrition.org (pointer courtesy of the DC Perl Mongers) I've now got some SpamAssassin rules for them. While I'd love it if SpamAssassin allowed multiple scores per mail message, rather than just a raw spam score (as I'd toss these damn things entirely, rather than quarantining them for later unread deletion), I'll take this, as it's better than nothing. It doesn't catch the "user doesn't exist" messages, but for better or worse I'm OK with that. While they suck, at least some of them are legitimate, so filtering them out may do actual harm.
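For reference, a rule like that is just a few lines in local.cf or user_prefs. This is a sketch of my own, not attrition.org's actual ruleset, so the pattern and score are illustrative:

    header   BOGUS_VIRUS_NOTICE  Subject =~ /(?:virus (?:found|alert)|returned mail|banned file)/i
    describe BOGUS_VIRUS_NOTICE  Useless "the mail you didn't send has a virus" notice
    score    BOGUS_VIRUS_NOTICE  4.0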
Now if everyone'd just get on the ball and implement SPF so we could all, in good conscience, toss more of this crap that'd be truly swell...
So, I finally bit the bullet and paid for Ecto, the most excellent upgrade to Adriaan Tijsseling's Kung-Log. It's no longer donationware (for which I expect most people never donated--I admit, alas, that I didn't) but payware with a two-week trial period. And I have to say... this is darned sweet. Being able to manage multiple blogs at once (with the odd blip here and there--drafts don't store which account they were associated with, so far as I can see) is really nice, especially now that we're looking at using Movable Type as an in-house project support tool. Using MT certainly beats weekly (or "whenever I remember") status report postings--whenever something of interest pops up it can be logged and saved, regardless of where I happen to be when it occurs. Definitely nice. We run RT in-house as well, but this sort of thing's less well-suited to RT, and being web-based, RT requires on-line access to the server.
Now, thinking about it, if someone built an off-line or semi-off-line RT tool, akin to Ecto for web logs, that'd be very nice. Maybe I'll dust off the CamelBones stuff and take a shot at it. (Or, given what I've got for time, maybe not... :)
I finally took a few minutes to put in place the last anti-blog-spam recommendation folks had--I renamed mt-comments.cgi to something else. It turned out to be as trivial as I thought. I just had to add the line:
CommentScript mt-despamcomments.cgi
to my mt.cfg file, rename mt-comments.cgi to mt-despamcomments.cgi, and rebuild all the files. No biggie.
'Course, this means that upgrades are going to be a pain in the neck, as I'll have to remember to do the renaming on each upgrade. No more drop-in upgrades, alas. It's probably time (or past time) for an install script for Movable Type that does this. OTOH, since I'm not paying for it, I'd say it's a bit much to expect. :)
Hopefully this doesn't screw up anyone's links, but the permalinks are fine so I can live with it if it does.
It's alive! Or, at least, officially on the web. September 15-17, Belfast, Northern Ireland. Short of something going horribly, terribly wrong at work, I'll be there.
Now to work out transport details. (It's a lot cheaper to fly into Dublin from Boston than into Belfast, but then there's that whole pesky "get to Belfast" thing...)