mlyixi's Blog

Why WifiSLAM can't solve The Apple Problem


I've read a few articles lately on the acquisition of WifiSLAM by Apple. For me it's a bittersweet event that I've been waiting for, as I led my previous startup Qubulus to the gates of the same company, presenting what is still the most beautiful API for indoor positioning based on Wifi radiomapping. As a bunch of people have pointed out: "it could have been you..."


Well, no. I came to realize shortly after the encounter with the Apple indoor tech team that a) radio wasn't their preference when it comes to indoor positioning accuracy, and b) they were not very good at what they were doing, only somewhat better than the competitors at Google and Sony (Ericsson).

A Swedish company, SenionLabs, was said at that point in time to have conducted a demo at the exact same place we picked, and there wasn't much of a difference in the user interface (i.e. the map) between the technologies. Which proves my point b): if all you look at is a map on a screen, at demo scale and demo accuracy, with at best a beta version of the tech, and you never look under the hood, then you are just like someone buying a car based on the color.
SenionLabs' and WifiSLAM's technologies are rather close in comparison, but I'd say the WifiSLAM architecture is way better at moving from macro positioning (GSM/UMTS/etc.) to local positioning (Wifi, BT) down to micro positioning by sensors (gyro, accelerometer, magnetometer, compass, light, etc.).
Which proves my a) statement: it was the use of sensors that they wanted, and bought.

Then I started to work on something new 14 months ago. I no longer believed that Wifi would be the solution to ubiquitous smartphone positioning indoors; basically all players were making it extremely difficult to create a common solution. Apple turned off RSSI readings for Wifi on iOS. Microsoft didn't even implement them on Windows Phone 7 or 8. The wild-west behavior of Android phone manufacturers created a mess: Samsung created so many new models so fast that they didn't implement Android and RSSI reading the same way throughout. No time for it. So you could say that the manufacturers killed Wifi as a good basis for indoor positioning, willingly or not.

What I started out with was knowledge about the sensors in combination with all the other components in a smartphone, and how they "feel" the presence of RF modules, the battery charger, the screen, etc. I realized that the way all smartphones are built today (cramped, packed, overheating, antenna problems, huge screens pulling 50%+ of all power, etc.) is killing sensor performance.

So even if you have almost "learning" algorithms (I can talk for days about the misconception of self-teaching algorithms...) like WifiSLAM's and SenionLabs', you can't get them to perform over time.
Simply put, the sensors are there in a smartphone to react to real-time issues; they are not in any way maintained to perform over time and distance.

Now, 14 months later, we are running a full-scale startup with product development, Ulocs, with a brilliant team that combines research on sensor performance with user interaction analysis. I won't tell you How we do It and What we Use in terms of Solutions, but I will tell you some of our findings.

Finding no 1. From a sensor technology standpoint, smartphones suck. And iPhones suck the most, because of their design freakshow. What do you think will happen if you put the antenna as a frame around the whole package, with maybe the most advanced screen technology on the market in the middle, pulling battery power faster than anything else? Correct: sensor disaster.

Finding no 2. Everyone is using a different internal architecture, but sensors get the lowest priority when it comes to internal resources, space and isolation. I bet the design process of a new platform goes like this:

- "Dudes, we need a Bigger screen!"
- "Yeah, let's get a bigger screen! Anyone got a science fiction battery to go with it??"
- "No, but we will have when we go into production; let's roll! Throw in 4 CPUs, 5 radios, lots of memory and a killer design based on a metal frame around the whole thing!"
- "Dude, that is aaawsome!!"
- "Wait, I think we need sensors too...?"
- "Uuuh, that's right, well put them somewhere, I don't care..."

Finding no 3. Everyone is using filters to read the sensors, so that the output to services and functionality based on real-time sensor input is stabilized. At first we thought these filters were adaptive and "smart". Turns out they are not. The filters do not adapt to the usage of the other parts of the phone platform; meaning: shit in, shit out.

Kudos to the WifiSLAM team on this too; they found it out 2.5+ years ago. They started to read the raw data output from the phone and built their filters and analytical algorithms on top of that. Still, they had to pull the old binary-map trick to get any kind of micro accuracy going, which shows the problem with basing smart stuff on mistreated sensors sending raw data. And now it's Apple's problem.

Basically, as long as you design a smartphone platform the way Apple does, you can't get any good accuracy based on sensors without binary maps. To get binary maps you need the indoor map of every facility where you want to provide your users with indoor positioning. So Apple has what we can see as pretty bad sensors, a lousy architecture for sensors, and a need for binary maps to pull it all together.
Bad boy, you are going to get punished...

It still means you have a huge advantage over radio mapping from Wifi, which takes a lot more work on site even if you get better accuracy. And anyone who suggests installing beacons is focused on office spaces, not public areas, malls, airports or other commercial spaces, because of the huge maintenance burden that brings.

Do you really want great micro positioning based on sensors? Then I suggest you work out your filters and architecture. Who you wanna call? Ulocs.

Btw, based on what I know about teams that get acquired by Apple when their code is superior to Apple's, I'm 90% sure WifiSLAM will end up in the dungeons under Infinite Loop...

Set up a Python scientific computing environment for Mac

  • Install Homebrew
  • Install Python and set the environment variables
  • Install gfortran
  • Download the numpy package manually (installing it with pip is broken)
  • pip install scipy
  • Install matplotlib: brew install freetype libpng; then pip install matplotlib.
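The list above can be sketched as one script. This is a sketch under the assumptions stated in the list (Homebrew itself already installed, and pip-installing numpy broken at the time, so it is built from a manually downloaded tarball whose URL is not given here):

```shell
#!/bin/sh
# Sketch of the setup steps above; assumes Homebrew is already installed.
brew install python            # Homebrew Python (puts python/pip in /usr/local/bin)
brew install gfortran          # Fortran compiler needed to build scipy

# numpy could not be installed with pip at the time; download the source
# tarball manually (URL omitted), unpack it, then build:
#   cd numpy-x.y.z && python setup.py install

pip install scipy              # builds against gfortran

brew install freetype libpng   # matplotlib's rendering dependencies
pip install matplotlib
```

Make sure /usr/local/bin comes before /usr/bin in your PATH, so the Homebrew python and pip are the ones actually invoked.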


ACK: shortcut ,a

The ack program must be present on the system; it can be installed via brew install ack and works like grep for text search. To invoke ack from vim:

:Ack [option] 'pattern' [dir]


Hammer: shortcut ,p


  • .markdown -- gem install redcarpet
  • .textile -- gem install RedCloth
  • .rdoc
  • .org -- gem install org-ruby
  • .creole -- gem install creole
  • .mediawiki -- gem install wikicloth
  • .rst -- easy_install docutils
  • .asciidoc -- brew install asciidoc
  • .pod -- Pod::Simple::HTML comes with Perl >= 5.10. Lower versions should install Pod::Simple from CPAN.
  • .1 -- Requires groff
  • .html
  • .xhtml
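Collected in one place, installing the renderers from the table above might look like this (install only the ones for formats you actually use):

```shell
# Markup renderers Hammer relies on, per the table above.
gem install redcarpet    # .markdown
gem install RedCloth     # .textile
gem install org-ruby     # .org
gem install creole       # .creole
gem install wikicloth    # .mediawiki
easy_install docutils    # .rst
brew install asciidoc    # .asciidoc
```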



Paste from clipboard: ,po (in vimified the leader key is ",")



Mac and KDE4: switching to zsh, for a beautified shell with a git prompt


Install Homebrew:

Then brew install zsh, vim, macvim, etc.


Make /usr/local/bin the first line, so the system looks for programs in /usr/local/bin first.


Add /usr/local/bin/zsh to register the shell.




git clone git:// ~/.oh-my-zsh


cp ~/.oh-my-zsh/templates/zshrc.zsh-template ~/.zshrc


chsh -s /usr/local/bin/zsh
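Put together, the whole switch can be sketched as follows. The oh-my-zsh repository URL is truncated above, so it appears as a placeholder here, and the PATH/shell-registration edits are left as comments since the exact config files aren't named in the text:

```shell
#!/bin/sh
# Sketch of the zsh switch described above.
brew install zsh

# Make /usr/local/bin come first in the search path, and register
# /usr/local/bin/zsh as an allowed login shell (see the steps above).

git clone <oh-my-zsh-repo-url> ~/.oh-my-zsh          # URL placeholder
cp ~/.oh-my-zsh/templates/zshrc.zsh-template ~/.zshrc
chsh -s /usr/local/bin/zsh                           # make zsh the login shell
```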




ln -s /path/to/cloud/oh-my-zsh/ ~/.oh-my-zsh


DEFAULT_USER="your user name"


Download it, then open iTerm's preferences, go to Colors under Profiles, click Load Presets, and import "Solarized Dark.itermcolors".

Then select Solarized Dark under Load Presets.

Patch the fonts:

Open Font Book


With that, the theme is set up; restart iTerm to see the effect. PS: the new version of iTerm2 needs the new version of the theme file. The theme file shipped with oh-my-zsh is the old version (172 lines) and must be updated to the 115-line theme file for the special characters below to display correctly.





Enable the color scheme: the latest Konsole already ships a Solarized scheme (not sure whether that is thanks to oh-my-zsh), so just change it under Settings → Manage Profiles → Edit Profile → Appearance.




iOS apps always use many libraries, and Apple only allows third-party libraries as static libraries, so developers can't ship a simple framework for others to call. Moreover, adding a static library to your own project is fairly tedious, especially when the project doesn't provide a static-library target, in which case you have to create the target yourself.


1. Create your own project and put it under git.

2. Add the subproject: git submodule add git:// (if the subproject has its own submodules, use --recursive when syncing with git)

3. If the subproject has no static-library target, create one:

  • Add a static-library target; this creates a header and an implementation file named after the target. Delete the implementation file; the subproject's headers can be included in the target header.
  • Set include/$(TARGET_NAME) in Public Headers Folder Path so the target is referenced as target/target.h. (Strictly speaking, this style of reference isn't required.)
  • Mark the headers as publicly visible or project-visible (private makes no sense here); add the pch file as well.
  • Add the implementation files to Compile Sources.
  • Consider adding resource files to Copy Files.
  • Build.

4. Drag the project file into the parent project.

5. Add the static-library target in the parent project.

6. Add the .a file under Link Binary With Libraries.

7. In Build Settings, add the linker flags -ObjC -all_load.

8. Add the header search paths in Header Search Paths. This part is hard to get right without knowing the values of the various environment variables, but did I mention you can add a script in Build Phases to print them all (env)? Note that TARGET_BUILD_DIR = BUILT_PRODUCTS_DIR. Once you know the variable values, compare them against the actual paths and map them accordingly.

9. Add the subproject's frameworks.

That's it: with the steps above done, one subproject has been added. Need more subprojects? Repeat the steps.
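Step 8's env-printing trick can be sketched as a Run Script build phase like the one below (TARGET_BUILD_DIR and BUILT_PRODUCTS_DIR are standard Xcode build settings; outside Xcode they will simply print empty):

```shell
# Run Script build phase: dump the build environment so you can see which
# variables correspond to the header paths you need.
echo "TARGET_BUILD_DIR   = ${TARGET_BUILD_DIR}"
echo "BUILT_PRODUCTS_DIR = ${BUILT_PRODUCTS_DIR}"
env | sort
```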



It really is tedious, but fortunately CocoaPods can now manage subprojects. CocoaPods is written in Ruby, and the Ruby shipped with ML (Mountain Lion) is rather old, so first install the version manager rvm:


$ \curl -L | bash -s stable --ruby




rvm install 1.9.3 --with-gcc=clang
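Once rvm and a newer Ruby are in place, getting CocoaPods itself is just a gem install. A sketch (the rvm install URL is truncated above, so it is not repeated here):

```shell
# With a modern Ruby from rvm active, install CocoaPods and use it in a project.
gem install cocoapods
pod setup                 # clone the public spec repo
# In your project directory, write a Podfile listing your dependencies, then:
pod install
```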


After some searching: just install gtk-qt-engine and then switch to Qt in System Settings, and it works.


cd ~/.kde4/Autostart

sed -i.tmp s/^screensaver=xscreensaver/screensaver=kscreensaver/




However, after installing it via yaourt, auto-completion didn't work. It turned out python2-rope wasn't installed; after installing it, completion appeared.