Michael Mol
08 July 2013 @ 09:15 am

At least, for now. I just watched something I honestly hadn't seen since 1997, when I was running Windows 95 on a machine that didn't even have 2D acceleration: something appearing row by row as it was drawn on the screen. I don't think it was supposed to look like that.

My GPU is built into my Pentium B940. This GPU has more oomph to it than my old and trusty TNT2 from back in the day. And I just watched it act like a 200MHz K6 blitting, pixel-by-pixel, a start menu to a 1MB framebuffer on the wrong side of a 16MHz 16-bit bus.

This is just wrong. It smacks of the days when I was still on dial-up in 2005, and every web dev out there was coding up pages optimized for their cozy DSL and cable (or worse, LAN) connections. Instead of dozens of graphic files (with the odd autostart video), though, now I get to deal with excessive CSS animations being kicked off before pageloads even finish.

People, the loading of your page doesn't need a full song and dance number. If you want to draw attention to UI elements, one of the most effective things I've seen done comes from mobile video games: have the UI element pulse until it's acknowledged. You don't need it to zoom onto the screen like an 80s stunt driver.

 
 
 
Michael Mol
03 June 2013 @ 08:19 pm
My brother wanted to know how to manage a large open-source project when, in his words, you have programmers who can come and go with little warning. Here was my response:

  • All successful open-source projects have a small group (even if it's just one) of developers who are dedicated, either because they believe in it or because they're paid. The members of this small group review and accept (or reject) contributed changes to the project.

  • All successful open-source projects have a public bug tracker where people can submit bugs.

  • All successful open-source projects have a public communications channel. Sometimes more than one. At least a mailing list and/or a forum, since they both represent a persistent record of conversation. Most projects aren't large enough to support both at the same time. Often, there's also a real-time group communications channel, such as IRC or Twitter.

  • All successful open-source projects need to involve something the *volunteer* contributors believe in and/or use; something needs to convince them that it's worth their time.

  • All successful open-source projects have a community of users.

  • Contributions come in many forms. Sometimes it's in the form of assisting other users. Sometimes it's in the form of code. Sometimes it's in the form of bug reports. Sometimes it's donations. Sometimes it's community mediation. Sometimes it's writing documentation. Sometimes it's testing for bugs. Sometimes it's advocacy.

 
 
 
Michael Mol
09 April 2013 @ 07:15 am
So I've got kvm working fine under Gentoo.[1] Right away, I know that I want to start moving system services off of wash (my Debian router which is to be retired) into VMs. Since I have VMs to work with, it naturally follows that I can split each service into its own VM for sandboxing purposes; squid will get its own VM, the Samba4 AD DC will get its own VM, etc. And since I've already got two Gentoo boxes on my network (my primary laptop and my primary desktop/home server), why not Gentoo as a VM guest?

Well, there are a few plausible reasons.

First and foremost, there's the small issue of compiling...when you're compiling All The Things, sometimes you hit upon a package that likes to consume All The RAM in the process. When you're building VMs, you want to give each VM only as much RAM as it needs to do its job effectively. Obviously, there's a bit of a disconnect; it doesn't make sense to build something whose build process requires 6GB of RAM when the normal services on the machine won't take more than 512MB.

Granted, it's only LibreOffice and Chromium whose build processes like to consume RAM like a leaky malloc in an inner loop, and I don't have a need to run those in a VM, but it feels safer to head off a possible trend toward heavier build processes conflicting with other VM requirements...which is a strong argument against Gentoo as a VM guest.

A second issue with running Gentoo as a guest is the sheer size of the Portage tree: the set of files representing the database of available packages and versions, which can run anywhere from 1GB to 16GB.[2] The *default* disk image size when creating a VM in virt-manager is 8GB, and that's an issue.

The first issue, that of compiling, is resolvable using binpkgs; you can build a package in one place and install it somewhere else. So I'll probably wind up using binary packages compiled where CPU and RAM are plentiful.
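A rough sketch of how that can look, in case it's useful to anyone else (the hostname, URL and paths below are placeholders, not my actual setup):

    # On the box with CPU and RAM to spare, in /etc/portage/make.conf:
    FEATURES="buildpkg"              # save a binary package of everything emerged
    PKGDIR="/usr/portage/packages"   # where those binary packages land

    # On a guest, either share PKGDIR with the build box (NFS works) and run
    #   emerge -uDN --usepkg @world
    # or point at the build box over HTTP and fetch binaries on demand:
    PORTAGE_BINHOST="http://buildhost.example/packages"
    # then: emerge -uDN --getbinpkg @world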

The second issue, that of the size of Portage, is also fairly easily resolvable; I can mount the guests' portage database and distfiles from the host via NFS.
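Something along these lines ought to do it; the hostname and subnet are placeholders, and /usr/portage is simply where the tree and distfiles live by default:

    # Host side, /etc/exports (192.168.122.0/24 is the libvirt default subnet,
    # standing in for whatever network the guests actually sit on):
    /usr/portage  192.168.122.0/24(rw,no_subtree_check,no_root_squash)

    # Guest side, /etc/fstab ("host" is a placeholder name):
    host:/usr/portage  /usr/portage  nfs  rw,nolock  0 0

With that in place, a guest never carries its own copy of the tree or of distfiles; the only local disk it needs is for the installed system itself.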

Still, it sounds like a lot of work. What's the gain?

Let's start with what makes Gentoo nice in the first place: I don't have to enable features when I don't need them. Look at any Debian or RedHat server, and you'll find loads of code that isn't mission-critical. X11 libraries, graphics munging libraries. Media. When you'd like to fit useful services into slivers of RAM on a server, it makes a difference.[3]
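To make that concrete, here's the sort of thing I mean, as a hypothetical make.conf for a headless guest (the exact flags would vary by service):

    # /etc/portage/make.conf on a headless service VM; flags chosen for illustration:
    USE="-X -gtk -qt4 -gnome -kde -alsa -pulseaudio -cups -bluetooth"

    # emerge -pv <package> will show which of these a given package actually honors.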

Gentoo also tends to have more recent versions of packages available than Debian/testing or (for sure) RedHat/CentOS. Yes, you can run Debian/sid, if you desire; I don't care to. Yes, you can install EPEL on RHEL/CentOS, tack on a number of additional repositories, and rebuild from SRPMs the packages you still can't find in a useful state...Gentoo makes coping with that kind of problem easier and more automatic.

Finally, at this point, it's the distro I'm most familiar with. I can make it dance and sing.

[1] I've never done this before primarily because my work-related interests at the time encouraged me to learn and work with Xen...now they don't. I may well play with Xen again in the future.
[2] The database is well under 2GB, but downloaded packages get stored in a subfolder, similar to /var/cache/apt/ on Debian.
[3] Perhaps there once was an argument that this wasn't at all relevant in a production environment, but if you think about it, the Modern Cloud is about running dozens of instances of the same thing, with more instances added as required to meet demand. The more you squeeze out of an instance, the better your overall utilization.
 
 
 
Michael Mol
08 April 2013 @ 11:19 pm
So, virt-manager and I had a fight today. There were several incorrectly-configured connections showing up which I couldn't persistently delete (they'd show back up on next start), or which I couldn't delete at all (this may be more related to libvirt hanging the UI on connection completion, but it was hard to tell).

I looked, found ~/.libvirt, didn't see anything of interest. Looked, found libvirt under ~/.config, but didn't see anything there. Searching /etc/libvirt/libvirt*.conf for pieces of the offending connections didn't turn up anything.

But I'd forgotten about ~/.gconf, the place where GNOME used to (and apparently still does, despite the presence of ~/.config) stick settings for itself and its applications. So I had to emerge gconf-editor, go in, dig through there, find the erroneous connection strings under an autostart key and a regular connections list, and remove those values. (If you stumble on this post while having similar difficulties, I've no doubt you'll find what you're looking for.)
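If you'd rather do the digging from a shell than click around in gconf-editor, gconftool-2 gets at the same data; the key paths below are from memory, so treat them as approximate:

    # Dump everything virt-manager has stashed in gconf:
    gconftool-2 --recursive-list /apps/virt-manager

    # The offending values were string lists of connection URIs, roughly here:
    gconftool-2 --get /apps/virt-manager/connections/uris
    gconftool-2 --get /apps/virt-manager/connections/autoconnect

    # Rewrite the list without the bogus entry (the value shown is a placeholder):
    gconftool-2 --set --type=list --list-type=string \
        /apps/virt-manager/connections/uris '[qemu:///system]'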

That is all.

(Well, OK. I'll gripe about something...why doesn't virt-manager allow creation of storage pools, network devices and other such things? virsh shows that it's possible to do those things, so why doesn't virt-manager?)
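For the curious, the virsh side of it looks roughly like this (the pool name and target path are placeholders):

    # Define, build and start a simple directory-backed storage pool:
    virsh pool-define-as guests dir --target /var/lib/libvirt/images/guests
    virsh pool-build guests
    virsh pool-start guests
    virsh pool-autostart guests

    # Networks follow the same pattern, but from an XML description:
    #   virsh net-define mynet.xml && virsh net-start mynet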
 
 
 
Michael Mol
26 March 2013 @ 08:23 am
For those in the know, what's the significant difference between the purposes of pkg-config and libtool? Why are these two different things?
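To make the question concrete, here's roughly how I see each one used day to day (a trivial, made-up example; "foo" is a placeholder library):

    # pkg-config answers "what flags do I need to compile and link against libfoo?"
    gcc $(pkg-config --cflags --libs foo) -o app app.c

    # libtool wraps the compile/link steps themselves, papering over per-platform
    # details of building and installing shared libraries:
    libtool --mode=compile gcc -c foo.c -o foo.lo
    libtool --mode=link gcc -o libfoo.la foo.lo -rpath /usr/local/lib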
 
 
 
Michael Mol
Quoted from the Gentoo-user mailing list:
On 2013-03-14, Dale <redacted> wrote:
I was wondering. Has anyone ever seen where a test has been done to compare the speed of Gentoo with other distros? Maybe Gentoo compared to Redhat, Mandrake, Ubuntu and such?
I just did a test, and they're all the same. CDs/DVDs of various distros dropped from a height of 1m all hit the floor simultaneously [there are random variations due to aerodynamic instability of the disk shape, but it's the same for all distros]. If launched horizontally with spin to provide attitude stability (thrown like a frisbee), they all fly the same.

Actually, the entire thread has good material so far. Spoiler alert: Most Gentoo users who've tested it don't see runtime speed as their #1 reason for using Gentoo.

 
 
 
Michael Mol
06 March 2013 @ 09:27 am

Before you say "LMGTFY" in response to a public query, here are four things you should consider:

  • Search engines tend to return results similar to things you've clicked on in the past, so if your previous searches have been fruitless, your future searches become more likely to be fruitless as well, as the search engine favors resources it's seen you visit.

  • The asker may have made a public query because they prefer recommendations from people who might have had some experience with the problem. It's better than trying fifteen solutions of varying degrees of functionality and bitrot. The Internet rarely forgets anything--including answers and solutions which aren't even remotely helpful!

  • When searching with query string Q, it's not unusual to encounter forum postings where someone says "let me google that for you", followed by a link to a Google search using string Q...which is how you found that post in the first place! (This is the single largest reason I use DuckDuckGo first, and Google only if DuckDuckGo doesn't turn up anything useful!)

  • LMGTFY is a snide and rude response.

 
 
 
Michael Mol
02 February 2013 @ 09:35 pm
So, our garbage disposal hadn't worked in a while. There was a switch under the sink which sorta worked at first, then had to be held in the "ON" position to work, and then had to be held in the "ON" position harder to work...and eventually it simply stopped working.

The existing gang box was grungy, so I elected to replace the whole thing. Went to Home Depot, picked up a gang box, a switch, some wire and a faceplate.

Get down under the sink, and...crap. I realize I really don't want to replace the existing wire, because I can't even see where the power source is. That wire sneaks away behind the dishwasher, where I can't get at it for the time being. So, fine. We'll leave the existing wire in place. It's dirty, but it's not bad...12-3 NM.

Unscrew the old faceplate, set it aside.

Unscrew the old switch...the guy used two different screws. And that bottom one is not the right kind of screw...no wonder the thing had some spring to it; it wasn't attached. Anyway, detach and set aside.

Neutral line was hooked together with a big red wirenut. Detach and set aside.

Unscrew the old gang box. The guy used drywall screws to screw it into particle board? Weird. Unscrew the insert clamps and set them aside.

Screw in the new gang box, use 1" #10 sheet metal screws. Pop out the bottom and the bottom-right knockouts...oh, right. Insert clamps. Go to insert the insert clamps...er, these are a bit small. Oops. Hm. I'm going to need 1/2" insert clamps...except I didn't know what they were called at this point, so I pored over the book I got for home improvement projects, and found what they were called. Also saw that you're supposed to use a yellow wirenut when connecting two 12ga solid-core wires, rather than a red one.

Go back to Home Depot, pick up 3/4" insert clamps and a wirenut kit.

Come back, set up the insert clamps, get everything wired up, go to screw in the switch...crap. The bottom screw of the switch bumps into the insert clamp for the bottom knock-out. No wonder the previous guy used a different screw for the bottom portion of the switch.

Well, if I don't do this right, something's going to get screwed up in the long run. I have to use a side knockout. And per code, I can't reuse a box with a knockout knocked out but unused...not even with some kind of plug. So, back to Home Depot to pick up another box, this time with 1/2" knockouts, and a couple 1/2" snap-in insert clamps. (Which aren't turning up on their website.)

Come back, remove the old new box, put in the new new box, punch out two side inserts, pull the cable through and tighten things down, wire up the switch, screw in the switch, screw the new faceplate on, energize the circuit, and...I have a working garbage disposal again! Yes!

After a bit, I realized why it stopped working in the first place. With the old setup, the bottom of the switch wasn't affixed at all, so the springiness was the wiring compressing and decompressing. With the neutral line connected with a too-large connector, every time the wires were pushed, the connection got looser, to the point where they weren't connecting at all. Of course, the bottom of the switch wasn't affixed because the insert clamp on the bottom was in the way. So if either the switch had been properly affixed, or a proper-sized wire nut had been used, the wiring probably wouldn't have failed.

(Still plan on moving the switch, but not at this moment.)
 
 
 
Michael Mol
26 January 2013 @ 11:07 am
MakerBot Replicator 2

Coal gave us the Industrial Age, the age when energy was suddenly cheap, and we could make things year-round in climates that were useless for agriculture. The atom bomb gave us the Nuclear Age, which changed the worldwide political landscape. Sputnik drove us into the Space Age, where "the sky's the limit" became a historical in-joke. Then came the Internet, and with it the Information Age, the age of ubiquitous, instantaneous communication.

And now we're about to enter the Material Age, the age of ubiquitous industry and building. 3D printers are awesome. They stand to do for mundane physical objects what the Internet did for information. And, really, by making mundane physical objects using information, well, things are about to get real interesting.

Some parts can't be printed. Chief among these are stepper motors and extruder heads. And these are pretty expensive. So what do we need? Obviously, we need the ability to make stepper motors and extruder heads. Designing a printable kit for automating production of stepper motors from wire, metal dowels and sheet metal should be among the top priorities of anyone pushing hard for 3D printers to take off. Extruder heads are the next step, but I know far less about them...

We need vending machines that print from a catalog of parts. These would be great for dollar stores printing trinkets, hardware stores printing replacement parts, and toy stores printing Lego(tm) knockoff pieces. This is something we have the tech to do now, and it should take less than a month for a team with a printer, a computer and a payment processor to assemble a working prototype.

And now on to the hard stuff.

We need to build assembly machines with parts bins for memory addresses. They can print parts, store intermediate assembly stages in part registers, and do full assembly for complicated objects. Think "CPU", but with physical objects, and with waldo movement sequences for CPU instructions. I can imagine the entire "structured computing" revolution of the 70s applying to part printing and assembling. I suspect much of the work involved will be coming up with standardized part-mating interfaces. Lego (tm), Kinex (tm), Erector (tm) and others are all familiar examples of standardized part-mating interfaces, but you'll obviously want something stronger and less modular. Perhaps take a look at what modular furniture is doing, and move those kinds of interfaces into automated printing and assembly.

We need 3D copy machines. They don't have to be perfect, and they'd undergo continual evolution. Start with a 3D analog to a silhouette for a first generation. Math geeks, help me out. I'm sure there's a name for that. Second generation would use depth sensing to handle somewhat complex concave models. I don't know what 3rd generation models would be expected to do.

We need to be able to control the color of the extruded plastic on the fly. Existing systems use different color supply plastics, and if you want a different color, you feed a different material. We need a colorless plastic, and we need an extruder head that can mix dyes into the plastic while it's in its molten state, to allow a variance of color during the printing process. Further, we need the ability to recycle printed parts safely on a small scale, and neutralize these dyes during the recycling process.

And now just a personal desire...

I need a slurry pump I can drive using my cordless drill. That would make it trivially easy to clean the snow and ice off my pitted and pocked driveway; just spray salt slurry on it. :)
 
 
 
Michael Mol
21 December 2012 @ 03:55 pm
Still in my initially-enamored phase, but these are very nice: Howard Leight R-01526 Impact Sport Electronic Earmuff



Not just useful for a gun range; I heard them kick in as I slammed a stubborn door shut. Their 'audio restoration' circuit can be turned up so that it's easier to hear things around me than it would be without the cans on. Or I can turn them off and have a very nice, quiet environment. I've currently got them plugged into the laptop, listening to Pandora as I write this.

And they keep road noise to a safe level while I wait for the bus, and they keep my ears warm in the cold wind.

Only downside so far...other people looked a tad freaked out seeing me walking or standing on the street while wearing them.