Wednesday, December 30, 2009

Solving Asterisk DTMF callerid issues - or why I love Open Source

I have been busy the last few days with an issue that came up with a new batch of GSM FCTs we need to interface with some of our deployed Hermes e-IPBX units.

Hermes e-IPBX is a solution we have been working on for some time now; it is basically a built-from-scratch Linux distro tailored for use with Asterisk as a PBX.
It's a "compact" solution, requiring around 32MB of flash for the entire system, including a custom web-based UI we have developed.

For a number of reasons that would take a lot of time to explain (and that make excellent material for another blog post), our current version of Hermes e-IPBX uses Asterisk 1.2 and not the latest stable 1.4 or 1.6.

The problem we faced was that we could not get caller ID working with these new FCT units.
As it turned out, these units were using DTMF, not FSK, to pass the caller ID info between the first and second ring.

So at the beginning we thought, "Hey, that's not a problem: Asterisk supports DTMF for caller ID, all we have to do is change the cidsignalling variable in zapata.conf to use dtmf."
We could not have been more wrong...

No matter what configuration we tried in zapata.conf, Asterisk was throwing errors and we could not get the caller ID info.

-- Starting simple switch on 'Zap/1-1'
Dec 28 00:07:29 ERROR[3896]: callerid.c:276 callerid_feed: fsk_serie
made mylen < 0 (-1)
Dec 28 00:07:29 WARNING[3896]: chan_zap.c:6627 ss_thread: CallerID feed
failed: Success
Dec 28 00:07:29 WARNING[3896]: chan_zap.c:6671 ss_thread: CallerID
returned with error on channel 'Zap/1-1'
-- Executing Wait("Zap/1-1", "5") in new stack
Dec 28 00:07:29 DEBUG[3896]: chan_zap.c:4001 zt_handle_dtmfup: DTMF
digit: 9 on Zap/1-1
Dec 28 00:07:29 DEBUG[3896]: chan_zap.c:4001 zt_handle_dtmfup: DTMF
digit: 1 on Zap/1-1
Dec 28 00:07:29 DEBUG[3896]: chan_zap.c:4001 zt_handle_dtmfup: DTMF
digit: 0 on Zap/1-1
Dec 28 00:07:29 DEBUG[3896]: chan_zap.c:4001 zt_handle_dtmfup: DTMF
digit: 5 on Zap/1-1
Dec 28 00:07:31 DEBUG[3896]: chan_zap.c:4907 __zt_exception: Exception
on 14, channel 1
Dec 28 00:07:31 DEBUG[3896]: chan_zap.c:4092 zt_handle_event: Got event
Ring Begin(18) on channel 1 (index 0)
Dec 28 00:07:32 DEBUG[3896]: chan_zap.c:4907 __zt_exception: Exception
on 14, channel 1
Dec 28 00:07:32 DEBUG[3896]: chan_zap.c:4092 zt_handle_event: Got event
Ring/Answered(2) on channel 1 (index 0)
Dec 28 00:07:32 DEBUG[3896]: chan_zap.c:4441 zt_handle_event: Setting
IDLE polarity due to ring. Old polarity was 0
Dec 28 00:07:34 DEBUG[3896]: pbx.c:1548
pbx_substitute_variables_helper_full: Function result is '"" <>'
-- Executing NoOp("Zap/1-1", "CALLERID="" <>") in new stack
Dec 28 00:07:36 DEBUG[3896]: chan_zap.c:4907 __zt_exception: Exception
on 14, channel 1
Dec 28 00:07:36 DEBUG[3896]: chan_zap.c:4092 zt_handle_event: Got event
Ring Begin(18) on channel 1 (index 0)

As you can see, Asterisk does capture part of the caller ID (which is 2114019105 in this case) *after* the initial first ring, but it misses the leading digits, detecting only the trailing 9105.

Trying to figure out what the issue was, our first thought was that the DTMF sent by the device was somehow distorted.
What was needed was a way to capture what the device was sending and check it.
So I decided to use one of what I call Asterisk's hidden super-weapons: ztmonitor.
ztmonitor is a utility that comes with the Zaptel (aka DAHDI) drivers and allows you to monitor the signal level of a Zap channel and also save a 'raw' image of whatever comes in or goes out through that interface.
It has proven a valuable tool in the past when we were trying to solve some echo issues a customer had.

The following command can be used to record any given Zap channel, mixing the input and output streams into a single file:

ztmonitor <channel number> -f <file_name.raw>

ztmonitor also has options to capture the input and output streams in separate files, and to grab the stream without echo cancellation applied.

Once you have captured the data, you can convert it to a WAV file for further processing using sox:

sox -r 8000 -s -w -c 1 file_name.raw file_name.wav

That gives you a WAV file of both streams, ready for further processing and investigation.
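Putting the capture and conversion together, a session looks roughly like this (a sketch: the channel number and file names are examples, not from the original setup):

```shell
# Capture channel 1 (rx and tx mixed) while a call with caller ID
# comes in, then stop ztmonitor with Ctrl-C.
ztmonitor 1 -f capture.raw

# Convert the raw capture (8kHz, signed 16-bit, mono) to WAV.
sox -r 8000 -s -w -c 1 capture.raw capture.wav
```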

The first thing to do is get a visual view of the captured signal.

There are several open source tools for this, but I use Audacity, as it provides several interesting features.

Following is what the captured streams looked like when loaded in Audacity.

By measuring the different parts of the stream, the following info was gathered:

After the first ring ends, there is a 640ms delay before the DTMF starts.
The DTMF pulses have a length of 84ms, with an inter-digit delay of 120ms.

Things looked "normal"; there were a few glitches in the captured stream, but nothing too big, and the noise level was low. So the next step was to test whether the DTMF pulses were valid.

And the tool of choice for this kind of work is none other than multimon.

Multimon can decode a variety of digital transmission modes commonly found on UHF radio, using the soundcard or a WAV file as input, and DTMF is one of the supported modes.

It's also pretty straightforward to use:

multimon -a DTMF -t wav ./streamtx.raw.wav

which produces output like this:

multimod (C) 1996/1997 by Tom Sailer HB9JNX/AE4WA
available demodulators: POCSAG512 POCSAG1200 POCSAG2400 EAS AFSK1200 AFSK2400 AFSK2400_2 HAPN4800 FSK9600 DTMF ZVEI SCOPE
Enabled demodulators: DTMF

That proved that the DTMF sent by the device was OK. So where was the problem?

After looking at the source code and searching for people with similar problems, it was clear that DTMF caller ID in Asterisk 1.2 is only "partially" working.
What "partially" means is that there are basically three ways caller ID is sent.
One is in the gap between the first and second ring; the second is signaled by a polarity reversal before the first ring; and, as I found out, in some countries caller ID is sent before the ring without any signalling (i.e. no polarity reversal).
Also, when DTMF is used to send the caller ID, a start-of-caller-ID marker is sent before the number, in the form of a letter (which differs from country to country).

In Asterisk 1.2, when the caller ID is sent with DTMF, only the polarity reversal method is supported, not the other two.
The good thing was that a lot of Indian telcos were using DTMF between the first and second ring, so there was a patch available for 1.2 that never made it into SVN, but got incorporated into 1.4 and 1.6.

So I got the patch, removed some of the debug lines and rebuilt chan_zap.

Now I could get the last 8 digits of the caller ID the FCT sent, but for some reason I was missing the initial two...
It looked like the DTMF detection routine was kicking in a bit too late, missing the first two DTMF digits.
After a bit more head-scratching, I remembered there was a definition in the Zaptel card driver (wctdm.c) that defines the time the line takes to settle after a ring:

#define DEFAULT_RING_DEBOUNCE 64 /* Ringer Debounce (64 ms) */

Setting this to 32ms solved the issue of the two first missing digits.
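To apply the change you have to rebuild and reload the driver. A rough sketch of the procedure (the source path is an assumption; adjust it for your Zaptel tree):

```shell
# Lower the ring debounce from 64ms to 32ms in the wctdm driver
cd /usr/src/zaptel
sed -i 's/DEFAULT_RING_DEBOUNCE 64/DEFAULT_RING_DEBOUNCE 32/' wctdm.c

# Rebuild, reinstall and reload the module
make
sudo make install
sudo modprobe -r wctdm && sudo modprobe wctdm
```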

The whole procedure took several days until we managed to solve the issue, but consider the alternative if this had been a closed source system, where we would have had to file a bug report with the company supplying the software.
We would probably have had to wait much longer for an answer, provided they had enough interest (and resources) to fix the issue.

Sunday, November 1, 2009

Setting up Android SDK on Ubuntu for the Samsung Galaxy

On my quest to get the Samsung Galaxy a new "ROM" to fix the issues mentioned in my last post, I installed the GalaxyHero ROM, which is a custom (or "cooked") ROM.
GalaxyHero does offer solutions to many of the issues I mentioned with the original Galaxy firmware and even the IIE update.
As usual when dealing with new devices, firmware updates of this nature are a good way to "brick" your device, so I took my time Googling, reading and taking notes on the actions required for the update, which among other things gives you root access to the phone.

The first problem I encountered, after downloading and installing the Android SDK and the Eclipse plugin on my 64-bit Ubuntu desktop, was that the phone was not recognized by the SDK.
Typing, in the SDK tools dir,

adb devices

came up with an empty list.

After a bit more searching, here is what you need to do to get your Samsung Galaxy to connect with adb on Ubuntu and to allow debugging on the phone.

On the phone, in Settings/Applications/Development, check the box "USB debugging".

Then add a new udev rule for the phone:
using your favorite text editor, create the file /etc/udev/rules.d/11-android.rules
and add the following line

SUBSYSTEM=="usb_device", SYSFS{idVendor}=="04e8", MODE="0666"

Please note that the USB vendor ID is not the same for Samsung as for other Android phones (e.g. HTC), and most of the documentation I found refers to HTC's ID.

Then type this:

sudo chmod a+rx /etc/udev/rules.d/11-android.rules
sudo /etc/init.d/udev restart
./adb kill-server

The above will make the phone "recognizable" by Ubuntu, but if you try to connect with adb it still does not work.
The problem is that the adb shipped with the Android SDK as of this writing (Nov 1st, 2009) does not work with the Galaxy.
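A quick way to sanity-check the udev side is to look for the Samsung vendor ID (the same 04e8 used in the rule above) on the USB bus:

```shell
# With the phone plugged in, it should show up with vendor ID 04e8
lsusb | grep -i 04e8
```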

To overcome this you need a patched version of adb.
More details, and the source to build adb yourself, can be found in the German Galaxy forum.

cd to your Android SDK tools dir.

Rename the original adb:

mv adb orig-adb

Download, unzip, and make the patched adb executable:

gunzip adb.gz
chmod +x adb

Then start the adb server:

./adb start-server

and see if the phone is recognized:

./adb devices
List of devices attached
I7500c0xVS8PQ4H device

Next I will post a few simple steps to get the GalaxyHero ROM installed, even without adb.

Sunday, October 25, 2009

My first week with an Android phone - Samsung Galaxy i7500

It's almost a week since I got my first Android-based phone, a Samsung Galaxy i7500, so I decided to write a few things about what has happened these last few days.
But before the "dirty" tech details, a few words about the process that got me there.

For a few months now I have been looking for a new phone, partly because my (t)rusty Qtek 9000, a Windows Mobile 6.1 phone, has started showing its age, and partly because I was interested in looking into mobile phone development on the new platforms that have been around for a while.
As you would expect, there were two candidates:
the iPhone 3G(S) from Apple and an Android-based phone, with the iPhone having some advantage.

So I started looking around the net to see what is needed to get into programming these two platforms, and what the dev communities say.
(I had some experience in mobile device programming both in the Microsoft Mobile and Linux, on the Neo Freerunner phone sometime in the past, but not with the new technologies that have come up since.)

To my “surprise” I realized that in order to get into iPhone development you had to be on a Mac platform (at least when I was doing my Googling, things change fast) and you had to have Apple's “blessing” if you wanted to get your application in the Apple store... Hmmm

No one can deny that Apple has some really good products and technologies but after I have “seen the light” with Open Source I really dislike the vendor lock-in.
Don't get me wrong, Apple fans: as I said, the products are great, but if another company (say Microsoft) had pulled the same "tricks" Apple is pulling on Palm, for example, people with pitchforks would be camped outside Microsoft HQ asking for heads to roll...

So after deciding not to go the Apple way, the next question was... “which Android based phone ?”

A lot more Googling, and visits to the local wireless operator shops (I am in Greece), to find the phone.
The result?
Most of the Android phones in the (Greek) market were HTC based and hovering in the 500 Euro range, but what caught my eye was the Samsung i7500.
The AMOLED screen looked fantastic, the specs were great (although the RAM was a bit on the "low side"), and it had all the bits and pieces to use it both as a dev platform and as a "normal" phone.
The only thing I missed (at that time) was a keyboard, which is a great addition and the main reason I have kept my Qtek for so long.

So I got the phone, and started my journey into the world of Android and Samsung i7500.

The good stuff first...

The AMOLED screen is really great :)
The Android Market (thank you, Apple, for the idea) is really cool. I wonder how we lived without "application stores" for so long :)
The design of the phone is great, although "plastic" in finish.

And the bad stuff...

Power management with the default firmware sucks, big time...
There is no VPN support (at least with the Cupcake version the phone shipped with)
Tethering is not supported
The 5 MPixel camera has some serious lag issues
The default browser has file uploads disabled for “security reasons”
If you try to sync your Outlook contacts you need...Google (mail,calendar etc)
The control software (Samsung New PC Studio) shipped with the phone runs only on Windows and DOES NOT support firmware updates of the phone (what were the Samsung people thinking ???!!!)

Let's look at them in more detail.

After charging it overnight and starting to use the phone with all systems up (WiFi, 3G, Bluetooth, GPS), I noticed that the battery was draining fast. At the beginning I thought it was because it was a new battery, and Li-Ion batteries need a few charging cycles before they "settle" at their peak.

After a couple more days and several charging cycles, I decided to shut down all the wireless and leave only the GSM part of the phone on. Same thing.
Did more Googling and found out that others had the same problem: the problem was the firmware, and it had to be updated.
But wait... Samsung New PC Studio says I have the latest firmware (and does not even recognize the phone when going for an update).
More Googling...
The version of PC Studio that Samsung ships with the phone... cannot update it... you need a newer one from the German Samsung site...
Download, install, reboot; the phone is recognized this time, but no new firmware... Ditto again.
More Googling....
You need a 3rd-party program and a firmware image people have literally hacked out of the Samsung site.
Done it, and the phone works as it should (3 days now with no recharge).
/me thanks Samsung for making me spend a day looking for all this.

Next task....

Move my contacts from the Windows Mobile based phone to the Android one...
(Still trying to)
Well, it looks like unless you use Google services and upload all your contacts and calendar data there, you are in for a long ride...

Tried to take some photos with the phone's 5-Mpixel camera...
Disappointed... The camera lag is such that you have to press the shutter button a couple of seconds in advance... Forget those fast-moving photos...

OK, I managed to take some pictures; let's try to upload them to Twitter using the TwitXL web application...
The Upload button of the page was disabled by the built-in browser. Hmmm, so I need an application, like on the iPhone? At least Apple has an 'explanation', that their system does not have a "filesystem"; what about Android? (It must be the first Linux-based filesystem-less system.)

Wanted to set up a VPN with the office (free WiFi access points are nice, but do you want all your data unencrypted over WiFi? I don't.)... no luck either...
Android 1.5 (Cupcake) does not have VPN support; 1.6 (Donut) does.
There is no "official" Samsung image with Donut for the moment, but the good people of the dev community have come up with solutions of their own (this is where Open Source makes all the difference).

What's the verdict after all this?
The phone was not ready for the market... I think Samsung pushed it out the door to catch the "Google wave", with little or no QC.
If a non-technically-inclined person gets his/her hands on this phone, they will return it for service within a few days... not good, unless of course the market you are after is "techies" like me, who can sift through pages and pages of forums and blogs looking for the little piece of info that fixes things...

Android has potential, but it still needs a lot of work and refinement if it's going to be in "people's" phones and not just the techies'.

BTW, if you are new to the Galaxy i7500 and struggling with the same issues, this link can help (at least it did in my case).

Saturday, September 12, 2009

Installing compact fluorescent lamps - A year later

Last November I decided to remove all incandescent lamps from the house and replace them with low-power compact fluorescent ones.
I replaced 35 100-Watt lamps with the Osram EL Dulux 20W.

I chose those mainly because I liked the type of light they produce, and decided to pay the extra cost to save some CO2 for the planet and go "green" on lighting.

A compact fluorescent lamp is much more expensive to buy than a standard incandescent one, but you save in the long run from the lower electricity consumption and the *longer* working life it is supposed to have.

The Osram EL Dulux 20W is rated at 10,000h, which would replace 10 normal lamps, according to what is printed on the box.
Assuming an average of 6h of working time per day, a lamp should last for approx. 1666 days, or about 4.5 years, before it fails.
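The lifetime arithmetic above checks out; in shell:

```shell
# 10,000 rated hours at 6 hours of use per day
echo $((10000 / 6))        # prints 1666 (days of service)
echo $((10000 / 6 / 365))  # prints 4 (full years)
```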

Well, I can report that in less than a year, 8 of the 35 lamps have failed and needed to be replaced.
That's almost 1 in 4 (about 23%), and it sure is not what I had hoped for.

I am not sure why the lamps failed, as they failed at random intervals and in random places in the house, with the first failing 3-4 months after it was installed.

I don't believe that anyone from Osram will read this blog post, but in case someone does: I have kept most of the failed lamps, as I was looking for a place that would accept them for recycling (yes, lamps do get recycled, and need to be, as they contain some "serious" metals and not-so-good chemicals). If you want, I can send them back for you to test, with all expenses paid.

In any case, Osram, you should check your quality, as you have a really dissatisfied customer here.

Saturday, June 13, 2009

European Commission pushes for software patents via a trusted court

First of all, a warning: if part of the following text looks Greek to you, it's because it is. :)
It's part of a message I received from Konstantina Zoehrer regarding the new "initiative" to get software patents into the EU through the backdoor, by use of a "trusted court".
For more info check this link.
There is also a petition you can sign.

Well, it looks like the "special interests" follow the good old saying: "if you fail once, try again".
So let's try and stop them one more time.

Greek text follows (translated):

We urge our legislators

* to pass national legal clarifications of patent law that exclude any software patents,
* to invalidate all granted patent claims that may be infringed by software running on programmable devices,
* and also to push for the adoption of such rules at the European level, including in the European Patent Convention.


Read more here

Tuesday, June 2, 2009

First results on a WiFi security survey

For the past few weeks I have been working on a paper on the security of WiFi networks.
Part of my research was to look at the state of security of deployed WiFi APs all over Athens.
Athens is a rather large city (4+ million people live here), and I believe that a sample of that size could be indicative of the general state of WiFi overall.

The first results are rather mixed.

The good news is that the number of WiFi APs deployed is really large.
By driving through some of the main streets crossing Athens, and some of the business areas, I found more than 10,000 unique AP MACs.
Considering that I only covered a very small portion of the Athens metropolitan area, and was only driving on main roads, I estimate that the number of deployed APs is over 200,000 to 250,000, and this is really a very conservative estimate.

The bad news now.

More than 50% of the APs surveyed are using no encryption or WEP, and of those using WPA, 70% are using the default ESSIDs set by the ISPs that provided the unit, making them easy targets for rainbow table attacks.

It looks like most people trust the settings their ISP provides when they get their ADSL IAD, or don't know how to change them, or are simply unaware of how WiFi works and of the dangers.
One ISP in particular seems to be the "king of unprotected WiFi APs" in Athens, as 3/4 of all the APs with their ESSID are either open or using WEP.

I am still collecting data and working on the paper, but given the rather large number of samples I have already got, I don't think the results will change much. But this remains to be seen.

On the funny side of things, if people do decide to change their ESSID they can be very creative :)

Monday, May 11, 2009

First results and thoughts on WPA security

I have spent the last few days reading about and playing around with the various tools available for cracking WPA, and here is what I came to.

Contrary to the "hype", WPA has not been cracked the way WEP was.
No "fatal" design flaw has been found that can be exploited to get access to your WiFi network.
The *current* and *known* (there is no way to emphasize this enough) ways of getting access to a WPA-protected network are the "old" ways: dictionary and/or brute force attacks and "rainbow tables".
That being said, the "weak link" in your WPA security is actually your chosen password.
If your password can be easily "guessed" (an 8-character password made of numbers (e.g. birth dates) or known words like "/dev/null" :), then you might get into trouble if someone targets you.
Another thing I realized is that by hiding your ESSID you actually become an easier target for an attack.
This has to do with the fact that your ESSID is actually part of the key derivation, and the empty ESSID is in the top 10 of ESSIDs people have pre-calculated rainbow tables for.

So, to make your WPA WiFi more secure, do the following:

1) Select a unique ESSID, so that an attacker cannot use a ready-made rainbow table and would have to recalculate the PMKs, which can take some time even with the help of Pyrit and some serious hardware.

2) Select a random password at least 20 characters long.

3) Switch to AES instead of TKIP.
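For point 2, one quick way to generate such a password on any Linux box (a sketch; adjust the character set and length to taste):

```shell
# Emit a 24-character alphanumeric passphrase from the kernel RNG
tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 24
echo
```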

Friday, May 8, 2009

Getting Pyrit to work on my system, and other WPA-PSK related rants

Recently I started reading about WPA-PSK security, as a client was using it in their office and, after some discussion, wanted to see how (in)secure it might be.

After some reading, it looks like the best way to attack a WPA-PSK based system (for the moment) is to create a rainbow table of possible PMKs (Pairwise Master Keys) and then let loose tools like cowpatty and aircrack-ng.
Basically you exchange time for space, as the same PMK can be used on any AP with the same SSID.

The problem is that computing a PMK requires something in the range of 16,000+ rounds of SHA-1, and this requires some really big computing power.
To give you an idea of the computing power required: my quad core @ 3GHz can do about 1.2K PMKs/sec, which is not that bad, but it would still take weeks to go over a decent word list.

Doing some more searching, I found that there is a program called Pyrit that uses the power of the GPU to do some serious PMK crunching. My "vanilla" (not overclocked) Nvidia 8800 GT does 4,800 PMKs/sec, while other, newer Nvidia cards can reach close to 50,000 PMKs/sec.

UPDATE 17/5/2009:
The problem mentioned below is solved in revision 99 of Pyrit.
It looks like it was a CUDA 2.2 bug.
I am leaving the text here for 'historical reasons', but you can safely ignore the fix.

So I updated my CUDA drivers and SDK (Pyrit requires CUDA 2.2), got Pyrit from SVN, built it and ran my first benchmark using only the CPUs.
Things were good, so I moved on to build the Nvidia CUDA module for Pyrit.
The build was OK:

stelios@Athena:~/pyrit/pyrit-read-only/cpyrit_cuda$ ./ build
running build
running build_ext
Compiling CUDA module using nvcc 2.2, V0.2.1221...
ptxas info : Compiling entry function 'cuda_pmk_kernel'
ptxas info : Used 42 registers, 32+24 bytes smem, 12 bytes cmem[1]
Building modules...
stelios@Athena:~/pyrit/pyrit-read-only/cpyrit_cuda$ sudo ./
running install
running build
running build_ext
Skipping rebuild of Nvidia CUDA kernel ...
Building modules...
running install_lib
running install_egg_info
Removing /usr/lib/python2.5/site-packages/CPyrit_CUDA-0.2.3.egg-info
Writing /usr/lib/python2.5/site-packages/CPyrit_CUDA-0.2.3.egg-info

but when trying to run the benchmark again, I got an error:

stelios@Athena:~/pyrit/pyrit-read-only/cpyrit_cuda$ pyrit benchmark
Pyrit 0.2.3 (C) 2008, 2009 Lukas Lueg
This code is distributed under the GNU General Public License v3

The ESSID-blobspace seems to be empty; you should create an ESSID...

Failed to load CUDA-core (CUDA_ERROR_INVALID_IMAGE).
Running benchmark for at least 60 seconds...

CPU-Core (x86_64): 302.43 PMKs/s, 99.41% occupancy
CPU-Core (x86_64): 292.03 PMKs/s, 90.08% occupancy
CPU-Core (x86_64): 300.92 PMKs/s, 87.42% occupancy
CPU-Core (x86_64): 303.17 PMKs/s, 99.17% occupancy

Benchmark done. 1198.55 PMKs/s total.

For some reason the CUDA part was failing to load.

Googling about the error, I found that a couple of others had the same issue, so it was not just me doing something wrong.
I emailed the author but received no reply, so after a day I started looking at the code to see where the problem came from. Open Source rulez :)

It turned out that the module failed to load the CUDA kernel.
Pyrit "converts" the CUDA cubin module to an include file, _cpyrit_cudakernel.cubin.h, and then uses the CUDA API to load the kernel module.
In my case, for some reason, the _cpyrit_cudakernel.cubin.h seemed to contain an invalid CUDA kernel image.

So I changed the part of _cpyrit_cuda.c that loads the kernel from the include file:

ret = cuModuleLoadData(&self->mod, &__cudakernel_module);

to a call that loads the cubin file directly:

ret = cuModuleLoad(&self->mod, "/your/path/to/cubinfile/_cpyrit_cudakernel.cubin");

(P.S. Adjust the path to point to your cubin file.)

That got the problem fixed, and the benchmark worked like a charm:

stelios@Athena:~/pyrit/pyrit-read-only/cpyrit_cuda$ pyrit benchmark
Pyrit 0.2.2 (C) 2008, 2009 Lukas Lueg
This code is distributed under the GNU General Public License v3

Running benchmark for at least 60 seconds...

CUDA-Device #1 'GeForce 8800 GT': 4796.11 PMKs/s, 89.75% occupancy
CPU-Core (x86_64): 283.45 PMKs/s, 84.37% occupancy
CPU-Core (x86_64): 298.66 PMKs/s, 96.09% occupancy
CPU-Core (x86_64): 289.44 PMKs/s, 99.15% occupancy

Benchmark done. 5667.66 PMKs/s total.

I haven't looked at why the _cpyrit_cudakernel.cubin.h has a corrupted kernel; I will probably do so during the weekend and post any patches to fix it.

Friday, April 3, 2009

Kudos to Intersys for their support

Yesterday I received the replacement for my 42" Toshiba LCD TV from Intersys, the local Toshiba distributor.
Although I am kind of disappointed with Toshiba's quality, as this is the second Toshiba TV with panel problems in 2 years, I am really pleased by the support I received from their local agent.

In both cases, not only did they exchange the TV, but I received a newer model. (This time I got a Toshiba REGZA 42ZV555DG as a replacement for the REGZA 42XV505DG I had.)
Intersys, acknowledging my frustration, even offered to replace it with a different brand they distribute and a much larger TV (a 50" Panasonic plasma), but for a number of reasons I decided not to do so.
They sent the new TV to my home and picked up the old one, without me having to pay anything or jump through hoops.

In difficult economic times like these, keeping up the level of service and support to your customers is really hard, but at the same time it's a very good way to keep customer loyalty.
Bad times are not going to last forever, and when things turn better, I believe all this effort will pay back.

Thursday, March 26, 2009

It's been almost a month since my last blog post...
Day-to-day work and my new pet project, TwitXL, are taking most of my time these days.
The Athens MediaCamp09 was a nice "distraction" also.
Met some very interesting people, with cool ideas.

I have several "half baked" posts that I need to finish and clean up so I can post them. I hope I'll get some free time next week.

Saturday, February 21, 2009

Displaying UTF8 characters from mysql using bash

I was working on a bash script that used mysql to retrieve UTF-8 encoded names from a table.
Everything was smooth until I used some non-English characters, like Greek and Brazilian Portuguese, and then I started seeing a bunch of ? printed instead of the characters I was expecting.
I spent a morning looking around for a solution, as I thought this was a bash issue.
No matter what I tried, the result was the same.
Then I added one more switch to the mysql query I was doing, to force mysql to output the result in UTF-8, in case it was not doing so (which I was *sure* it was, as the tables were in UTF-8 encoding).
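The exact switch did not survive in this copy of the post; my assumption is the standard mysql client option for forcing the client character set, which looks like this (user, database and query are placeholders):

```shell
# Hypothetical reconstruction: make the mysql client talk UTF-8 to the
# server so results come back UTF-8 encoded
mysql --default-character-set=utf8 -u myuser -p -e "SELECT name FROM people" mydb
```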


After adding this, all my problems were solved...

Mental note: when I am "sure" about something, always test it, just to be sure it works as "expected".

Tuesday, February 3, 2009

GoogleEarth 5 on Ubuntu 8.10

Got Google Earth 5 today and tried to install it on my Ubuntu desktop. Once it was installed, running the binary gave me the following error:

googleearth-bin: relocation error: /usr/lib32/i686/cmov/ symbol BIO_test_flags, version OPENSSL_0.9.8 not defined in file with link time reference

To solve this, just cd to the GoogleEarth dir and rename the bundled libcrypto library to something else:

stelios@Athena:~$ cd google-earth/
stelios@Athena:~/google-earth$ mv

Saturday, January 31, 2009

Ubuntu one liner - Find out which packages are installed

The following one-liner can be useful if you want to see which packages are installed on your Ubuntu machine (it should also work on Debian):

dpkg --get-selections | grep -v deinstall
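A common companion use (standard dpkg/apt options, not from the post) is cloning the package set to another machine:

```shell
# Save the selection list on machine A...
dpkg --get-selections | grep -v deinstall > packages.txt

# ...and replay it on machine B
sudo dpkg --set-selections < packages.txt
sudo apt-get dselect-upgrade
```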

Monday, January 26, 2009

When getting a TCP packet from the USA is faster than from a local ISP...

Today I decided to update our desktop Linux machines to the latest and greatest Ubuntu version (8.10).
I had been using it for some time on my laptop, and decided it would not break anything, so it was time for an upgrade.
Since the Digital-OPSiS office is in Athens, Greece, we use the Greek mirror as a repo to pull updates etc.
I noticed that the download speed was not that great, so I switched one of the machines to use the US mirror instead... and it took about 40% less time to download the same packages...
Doing a simple ping to the GR and US mirrors cleared things up a bit more.

stelios@DIAS-Linux:~$ ping
PING ( 56(84) bytes of data.
64 bytes from ( icmp_seq=1 ttl=58 time=197 ms
64 bytes from ( icmp_seq=2 ttl=58 time=199 ms
64 bytes from ( icmp_seq=3 ttl=58 time=201 ms
64 bytes from ( icmp_seq=4 ttl=58 time=197 ms
64 bytes from ( icmp_seq=5 ttl=58 time=194 ms
64 bytes from ( icmp_seq=6 ttl=58 time=202 ms
64 bytes from ( icmp_seq=7 ttl=58 time=204 ms

--- ping statistics ---
7 packets transmitted, 7 received, 0% packet loss, time 5998ms
rtt min/avg/max/mdev = 194.562/199.653/204.150/3.273 ms

stelios@DIAS-Linux:~$ ping
PING ( 56(84) bytes of data.
64 bytes from ( icmp_seq=1 ttl=54 time=90.2 ms
64 bytes from ( icmp_seq=2 ttl=54 time=89.9 ms
64 bytes from ( icmp_seq=3 ttl=54 time=90.3 ms
64 bytes from ( icmp_seq=4 ttl=54 time=89.5 ms
64 bytes from ( icmp_seq=5 ttl=54 time=89.1 ms
64 bytes from ( icmp_seq=6 ttl=54 time=88.6 ms

--- ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1998ms
rtt min/avg/max/mdev = 88.453/89.106/90.142/0.816 ms

It looks like it takes more than twice the time to reach the local Athens university, where the GR mirror is hosted, than to reach the US mirror.
I was aware that there are capacity issues at the AIX the Greek providers use for interconnecting, but this is really ridiculous...

Friday, January 23, 2009

Doing a reverse ssh tunnel the embedded way

Having a large number of Asterisk PBX installations creates some interesting problems for the people who provide support for them.
One of the major issues is how you get access to the PBX if it is sitting behind a firewall/NAT where you have little or no control and/or has a dynamic IP.
In most cases you could forward the ssh port from the vpn/router to the Asterisk machine, but several problems can come up this way:
1) More and more companies have a strict "no incoming ports open" policy.
2) Having permanent "unrestricted" access to the PBX equipment and its logs and functions might make some companies very skeptical.
On the other hand, sending a tech on a two-hour drive to add an extension or make a minor modification to the dialplan is rather expensive and would create a real support nightmare.

After thinking about it and discussing it with our clients (and their clients), we came up with the idea of a customer-triggered solution that would create some form of secure "tunnel" from the Hermes e-IPBX to the support center of the company that is providing support for the PBX.

The idea is simple.
If a tech needs to access the PBX remotely, a person in the company calls an extension and enters a password, and the PBX creates a secure tunnel to the server of the support company, giving them access.
Once support has finished, the person in the company calls the same extension again, shutting down the tunnel, and everything goes back to normal.
No open ports, no "unauthorized" access.

The next question was what type of secure tunnel to use.
The solution had to provide a secure login and also access to the web interface (port 80) of the PBX.
Since Hermes e-IPBX runs on embedded devices, it also had to be something small in size.
With this in mind we started investigating two options:
a PPTP VPN and a reverse ssh tunnel.

We started with the ssh option first, as almost all of the required pieces of software were in place: Hermes e-IPBX already uses dropbear as an ssh client/server.

One little-known fact about ssh is that on top of providing secure logins to remote hosts, it can create secure tunnels between two points and forward ports between them.
An even less known fact is that it can create reverse tunnels, where machine A, which is behind a firewall/NAT, can be accessed by machine B without having to change anything in machine A's firewall/NAT.
For this to work, machine B must have a publicly accessible ssh server.

It goes like this:

Machine A initiates a reverse ssh tunnel connection to machine B's ssh server:

# ssh -fNR [bind_address:]port:host:hostport [user@]hostname

Once this is done, machine B initiates a LOCAL connection to the port machine A is forwarding.

# ssh localhost -p port

As an example, we want to access the target node from our node. The port we want to access is port 22 and it will be accessible from our node at port 2222.

# ssh -fNR 2222:localhost:22

After that, you are prompted for a password as usual. After a successful login the command quits, but the tunnel remains in the background. To access the target node, connect from our node to the forwarded port (2222). So if you want to ssh into the machine (since we have forwarded the ssh port), use this command:

# ssh localhost -p 2222

That is all. You have now logged in to the remote machine.

We had this example tested on our Ubuntu desktops, but when we tried to implement it on the Hermes e-IPBX a few problems came up.

First, dropbear requires slightly modified commands: the combined flags must be given separately, so the

# ssh -fNR 2222:localhost:22

becomes

# ssh -f -N -R 2222:localhost:22

Second, and most important, the moment we had machine B's port 22 open (the one with the publicly accessible ssh server) we received a number of brute-force ssh password guessing attacks.
Normally this would not be much of a security problem (unless of course your root password is "sex" or "god" :) but having to sift through large logs full of failed attempts is annoying, and it could also work as a potential DoS attack.

The good thing about these ssh brute-force attacks is that in most cases they are started by "script kiddies" and the only port they scan is 22.
So if you move your ssh server to another, higher port the problem pretty much disappears.
Of course, as added protection on top of your extra-secure-ssh-password :) it would make sense to put the machine with the ssh server in the DMZ of your network, in case your extra-secure-ssh-password is not so secure or a security hole is found in your ssh server.
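If machine B runs OpenSSH, for instance, moving the listener is a one-line change in its server config (3333 here is just an example port):

```
# /etc/ssh/sshd_config on the publicly reachable machine (machine B)
Port 3333
```

Restart the ssh service afterwards and point your clients at the new port.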

So assuming you move the ssh server to port 3333 the command on machine A should become

# ssh -p 3333 -f -N -R 2222:localhost:22
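A minimal start/stop wrapper along these lines could live on the PBX and be triggered from the dialplan; the support hostname, ports, and pidfile path below are hypothetical placeholders:

```shell
#!/bin/sh
# Sketch of a customer-triggered tunnel wrapper.
# SUPPORT_HOST, the ports, and PIDFILE are made-up example values.
SUPPORT_HOST=support.example.com   # publicly reachable ssh server (machine B)
SSH_PORT=3333                      # non-default port, outside the script kiddies' scans
FWD_PORT=2222                      # port on machine B that forwards back to our port 22
PIDFILE=/var/run/support_tunnel.pid

tunnel_cmd() {
    # dropbear wants the flags given separately, not clustered as -fNR
    echo "ssh -p $SSH_PORT -f -N -R $FWD_PORT:localhost:22 $SUPPORT_HOST"
}

case "$1" in
  start) $(tunnel_cmd) && pidof ssh > "$PIDFILE" ;;
  stop)  [ -f "$PIDFILE" ] && kill "$(cat "$PIDFILE")" && rm -f "$PIDFILE" ;;
  *)     echo "usage: $0 start|stop" >&2 ;;
esac
```

Calling the script with `start` brings the tunnel up and records the pid; `stop` tears it down, closing the support window.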

In a following post I'll show how to merge all the above with the Asterisk dialplan and create a neat support feature.

Tuesday, January 20, 2009

The Voltcraft Energy Logger 3500 has arrived

The Voltcraft Energy Logger 3500 arrived today and I already had my first surprise.
As a test, I connected an extension cord with my mobile's charger and a BT handset charger plugged in, to see how it works.
I unplugged the devices one after the other to see how much the current consumption dropped, and even with both of them disconnected the Voltcraft was registering a 1W power draw.
I thought it was weird, considering that the extension cord is passive, but then I noticed that my extension cord has a switch with a light to show when it is on or off.
Switching off the cord, the power consumption went to 0...
Wow, 1W from the lamp of the extension cord!
Although this is close to the error range of the Voltcraft, it made me wonder how many other devices are "hidden" consumers of electricity around the house or the office.

Thursday, January 15, 2009

Measuring embedded asterisk power consumption

For some time now I have wanted to look at the power consumption of embedded devices running Asterisk compared to a "standard" PC (if such a thing exists), and possibly extend the comparison to standard PBXs as well.

At first glance it looks easy enough, but as with all measurements where you want an accurate result, the devil is in the details.

The first question that has to be answered is: how do you get an accurate measurement of the power consumption?

There are several low-cost "energy metering devices" (15-20 Euros) on the market (in some cases you can even find them in supermarkets), but the accuracy they provide is rather dubious.
At first look, most could not measure power below 5W, which is close to what a lot of embedded boards are rated at.
It is also very close to what most PC power supplies consume when idle.

There is some professional equipment out there, but spending 400-2500 Euros was not an option.
So after some searching and reading I found a device that looks accurate enough and won't make a big dent in the budget.

This was the Voltcraft Plus Energy Logger 3500 from

According to the specs it can measure:

Operating voltage: 230 V AC
Power measurement display: 0.1 - 3500 W
Energy consumption display: 0.000 - 9999 kWh
Display: 3 cells with 4 positions each
Tariff range: 0.000 - 9.999
Accuracy: 5 - 3500 W (±1% + 1 count)
2 - 5 W (±5% + 1 count)
less than 2 W (±15% + 1 count)

So it provides ±5% accuracy in the 2-5 W range, which is rather good for a device that costs 50 Euros.
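To put those accuracy figures in perspective, here is a quick worst-case error calculation; it assumes "one count" equals the 0.1 W display resolution:

```python
def worst_case_error(watts, resolution=0.1):
    """Worst-case absolute error per the spec table above.
    Assumes 'one count' equals the 0.1 W display resolution."""
    if watts >= 5:
        pct = 0.01   # +/- 1% band
    elif watts >= 2:
        pct = 0.05   # +/- 5% band
    else:
        pct = 0.15   # +/- 15% band
    return watts * pct + resolution

# A 3 W embedded board could read anywhere in 3 W +/- ~0.25 W
print(worst_case_error(3.0))
```

So even in the tricky 2-5 W range the uncertainty stays around a quarter of a watt, good enough to separate embedded boards from idle PCs.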

The other big advantage is that it uses an SD card to store measurements and comes with a piece of software to display the data captured.

I have placed an order for it and expect it to arrive within the next week.
That should give me enough time to figure out which devices to test and how to test them.

Any ideas/suggestions/criticism are welcome.

Monday, January 12, 2009

Musopen ! copyright free (public domain) music

I was looking for free (as in beer) music I could use with our HERMES e-IPBX for Music On Hold.
After some Googling I found Musopen.
Musopen is an online music library of copyright-free (public domain) music.
As a footnote, if you decide to use it, please make a donation (I know from experience that most open source projects need help in any form).

Sunday, January 11, 2009

World PSTN Tone Database

If you are interested in setting your SIP phones' or Asterisk dial tone (or other tones) to match the country you are in, there is an online database with tone settings.

It provides the frequency and cadence info and displays it in the format used by Asterisk or Sipura/Linksys phones.

Friday, January 9, 2009

Updating the NVIDIA CUDA driver in Ubuntu

I am running Ubuntu 8.04 (Hardy) 64-bit on my desktop and have the NVIDIA CUDA driver installed to do some "R&D" work on accelerating Asterisk codecs using NVIDIA GPUs.
Since Ubuntu does not provide a package with the CUDA drivers, I installed them manually.

Everything is great until you get a new kernel update from Ubuntu...
After the kernel update, the NVIDIA driver (as expected) does not load and your X server falls back to a low-resolution mode.
You then need to reinstall the NVIDIA CUDA drivers, which can be a bit problematic since the installer requires the X server to be shut down.

So here is what I do.

First press Ctrl+Alt+F1 to switch to a virtual terminal, and log in.

then type

sudo /etc/init.d/gdm stop

That stops the X server and you can now install the drivers.

Go to the directory where the installer file is and run it:

sudo ./

The installer will warn you about an already installed driver, but you can ignore it.
It will then fail to find a pre-built kernel module, so it will build one on the fly.

When asked about updating your X server config, answer no, as this will probably mess it up.

Do a reboot, and the NVIDIA logo (with the big BETA) should come up; you should now be in the same state as before the kernel update.

BTW you do not need to update the NVIDIA SDK or the tools.

Sunday, January 4, 2009

Happy New Year

Happy New Year to everyone !
Let's hope that things will get better this year, although the first signs are not that good :(