Operation Catalog
Product model: QAI 9-4    Part / component No.: YB458-71    Page _ of _

Op. No.   Operation            Equipment           Op. cards   Remarks
5         Stock preparation    -                   1
10        Turning              C620                1
15        Turning              C620                1
20        Rough milling        6H11                1
25        Boring               C620                1
30        OD turning           C620                1
35        Profile milling      6H11                1
40        OD turning           C620                1
45        Tapping              CH12A               1
50        Arc milling          6H11                1
55        Filing               Fitter's bench      1
60        Bore lapping         Lapping head        1
65        Inspection           Inspection bench    1
70        Grinding             3153                1
75        Turning              C620                1
80        Deburring            Fitter's bench      1
85        Bore lapping         Lapping head        1
90        Inspection           Inspection bench    1
95        Passivation          -                   1
100       Inspection           Inspection bench    1
Operation Card - Op. No. 5: Stock preparation
Part: Outer cylinder bushing   Material: QAI 9-4   Hardness: -   Equipment: Sawing machine
Locating / clamping: -   Page _ of _
Stock: 50 x 310.5
(No sequence entries.)

Operation Card - Op. No. 10: Turning
Part: Outer cylinder bushing   Material: QAI 9-4   Hardness: -   Equipment: C620
Locating / clamping: -   Page 1 of 1
1. (Fixture: three-jaw chuck)
2. Turn the φ40 outer cylindrical surface (tool: turning tool)
3. Drill the φ24 hole (tool: drill)
4. 45° chamfer on the right edge of the φ40 (tool: radius tool)
5. 45° chamfer on the right edge of the φ24 bore

Operation Card - Op. No. 15: Turning
Part: Outer cylinder bushing   Material: QAI 9-4   Hardness: -   Equipment: C620
Locating / clamping: -   Page 1 of 1
1. (Fixture: three-jaw chuck; gauge: snap gauge)
2. Face the right end of the φ37 (tool: turning tool)
3. Turn the φ37 outer cylindrical surface
4. Turn the φ47 outer cylindrical surface
5. Face the right end of the φ47

Operation Card - Op. No. 20: Rough milling
Part: Outer cylinder bushing   Material: QAI 9-4   Hardness: -   Equipment: 6H11
Locating / clamping: -   Page 1 of 1
1. (Fixture: milling fixture)
2. (Fixture: rotary table)
3. Rough mill the φ37 outer cylindrical surface (tool: form milling cutter)
4. Rough mill the flat 14 mm from the center
5. Rough mill the transition radius R7

Operation Card - Op. No. 25: Boring
Part: Outer cylinder bushing   Material: QAI 9-4   Hardness: -   Equipment: C620
Locating / clamping: -   Page 1 of 1
1. (Fixture: soft three-jaw chuck)
2. (Fixture: special fixture)
3. Bore the hole (tool: boring tool)
4. Cut the groove (tool: grooving tool)

Operation Card - Op. No. 30: OD turning
Part: Outer cylinder bushing   Material: QAI 9-4   Hardness: -   Equipment: C620
Locating / clamping: -   Page 1 of 1
1. (Fixture: lathe mandrel)
2. Round the sharp edges to R0.2-0.3 (tool: radius tool)
3. Turn the φ36.3 outer circle (tool: turning tool)
4. Turn the face 23 mm from the left end face

Operation Card - Op. No. 35: Profile milling
Part: Outer cylinder bushing   Material: QAI 9-4   Hardness: -   Equipment: 6H11
Locating / clamping: -   Page 1 of 1
1. (Fixture: mandrel; gauge: snap gauge)
2. Mill the φ36.5 cylindrical surface (tool: form milling cutter)
3. Mill the two flats, top and bottom, 13 mm from the center
4. Mill the transition radius R3

Operation Card - Op. No. 40: OD turning
Part: Outer cylinder bushing   Material: QAI 9-4   Hardness: -   Equipment: C620
Locating / clamping: -   Page 1 of 1
1. (Fixture: mandrel; gauge: snap gauge)
2. (Fixture: special fixture)
3. Turn the φ37.7 outer cylindrical surface (tool: turning tool)
4. Turn the 2 x 0.6 tool-relief groove
5. Turn the 0.5 x 45° chamfer
6. Chamfer the right end face

Operation Card - Op. No. 45: Tapping
Part: Outer cylinder bushing   Material: QAI 9-4   Hardness: -   Equipment: CH12A
Locating / clamping: -   Page 1 of 1
1. (Fixture: mandrel)
2. Deburr (fixture: special fixture; tool: tap M4-2; gauge: plug gauge M4x0.7-2)
3. Countersink the thread 90° to full depth (tool: drill; gauge: plug gauge)

Operation Card - Op. No. 50: Arc milling
Part: Outer cylinder bushing   Material: QAI 9-4   Hardness: -   Equipment: 6H11
Locating / clamping: -   Page 1 of 1
1. (Fixture: mandrel)
2. Mill the R6.5 arc (tool: milling cutter)

Operation Card - Op. No. 55: Filing
Part: Outer cylinder bushing   Material: QAI 9-4   Hardness: -   Equipment: Fitter's bench
Locating / clamping: -   Page 1 of 1
1. File the R5 to shape (tool: file)
2. Remove burrs from the external surfaces

Operation Card - Op. No. 60: Bore lapping
Part: Outer cylinder bushing   Material: QAI 9-4   Hardness: -   Equipment: Lapping head
Locating / clamping: -   Page 1 of 1
1. (Fixture: three-jaw chuck)
2. Lap the φ25.99 bore (tool: lap)

Operation Card - Op. No. 65: Inspection
Part: Outer cylinder bushing   Material: QAI 9-4   Hardness: -   Equipment: Inspection bench
Locating / clamping: -   Page _ of _
1. Inspect the 2-M4x2 threads (gauge: plug gauge)
2. Check the φ37.5 outer circle (gauge: vernier caliper)
3. Inspect the φ25.99 bore (gauge: vernier caliper)
4. Inspect the groove width 7

Operation Card - Op. No. 70: Grinding
Part: Outer cylinder bushing   Material: QAI 9-4   Hardness: -   Equipment: 3153
Locating / clamping: -   Page 1 of 1
1. (Fixture: special fixture)
2. Grind the φ36.3 outer cylindrical surface (tool: grinding wheel)
3. Grind the flat at the right end of the threaded hole

Operation Card - Op. No. 75: Turning
Part: Outer cylinder bushing   Material: QAI 9-4   Hardness: -   Equipment: C620
Locating / clamping: -   Page 1 of 1
1. Turn the 2.5 x 30° chamfer (fixture: special fixture; tool: turning tool)
2. Polish R0.5 (tool: abrasive paper)
3. Remove the burrs left by grinding

Operation Card - Op. No. 80: Deburring
Part: Outer cylinder bushing   Material: QAI 9-4   Hardness: -   Equipment: Fitter's bench
Locating / clamping: -   Page 1 of 1
1. Remove burrs from the external surfaces

Operation Card - Op. No. 85: Bore lapping
Part: Outer cylinder bushing   Material: QAI 9-4   Hardness: -   Equipment: Lapping head
Locating / clamping: -   Page 1 of 1
1. (Fixture: three-jaw chuck)
2. Lap the φ26 central bore (tool: lap)

Operation Card - Op. No. 90: Inspection
Part: Outer cylinder bushing   Material: QAI 9-4   Hardness: -   Equipment: Inspection bench
Locating / clamping: -   Page 1 of 1
1. Check φ37.7 (gauge: vernier caliper)
2. Check φ26 (gauge: vernier caliper)
3. Inspect the 7.5 dimension (gauge: vernier caliper)

Operation Card - Op. No. 95: Passivation
Part: Outer cylinder bushing   Material: QAI 9-4   Hardness: -   Equipment: -
Locating / clamping: -   Page 1 of 1
1. Passivate the part.

Operation Card - Op. No. 100: Inspection
Part: Outer cylinder bushing   Material: QAI 9-4   Hardness: -   Equipment: Inspection bench
Locating / clamping: -   Page 1 of 1
Note: deliver to stores as a set with the outer cylinder ZL10-30-01 bearing the same serial number.
(No sequence entries.)
Tooling Catalog
Product model: -    Part / component No.: -    Page 1 of 1

Seq.   Tooling name           Drawing No. / size   Op. No.   Remarks
1      Drill                  22                   10
2      Plug gauge             24D6                 10
3      Snap gauge             47.5d7               15
4      Ring gauge             32.5-Ⅲ               25, 65
5      Snap gauge             32.5d5               30
6      Form milling cutter    B-14x16              20
7      Form milling cutter    B-6x15xR0.3          35
8      Snap gauge             36.5d7-B             40
9      Drill                  L-3.3                45
10     Tap                    M4-2                 45
11     Plug gauge             M4x0.7               45
12     Plug gauge             M4x0.7-2             45, 65
13     Taper mandrel          D=26                 85, 90
14     Ring gauge             26-Ⅲ                 85, 90
15     Ring gauge             25.96-Ⅲ              25
16     Snap gauge             40d6                 10
17     Ring gauge             25.99-Ⅲ              60, 65
18     Taper mandrel          25.97-Ⅱ              70
Graduation (Design) Thesis
Opening Report

Department: Department of Mechanical and Electrical Engineering
Major: Mechanical Design, Manufacturing and Automation
Class: 161002
Student name: Zhang Yuhang (張宇航)
Student No.: 103329
Supervisor: Deng Xiujin (鄧修瑾)
Report date: 2014/3/8
Graduation (Design) Thesis Opening Report Form

Thesis title: Process planning for the outer cylinder bushing and design of the lathe mandrel and milling copy-form fixtures
Student name: Zhang Yuhang (張宇航)    Student No.: 103329    Supervisor: Deng Xiujin (鄧修瑾)
Topic source (tick one): Research □   Production ?   Laboratory □   Special study □
Thesis type (tick one): Design □   Thesis ?   Other □
1. Significance of the topic
In the product, one end of the outer cylinder bushing is connected to the outer cylinder assembly of the hydraulic booster, and its central bore mates with the outer diameter of the piston, providing support at one end. The part therefore plays an important role in the machine and is produced in large quantities.
Completing this topic is meaningful in the following respects:
1. It allows me to understand the characteristics and working principle of the outer cylinder bushing.
2. It helps me master the machining process of the outer cylinder bushing, so that the designed part meets the accuracy, surface roughness and other process requirements.
3. By designing the required fixtures properly to satisfy the process requirements, I can combine theoretical knowledge with practice.
4. The topic improves the student's ability to design independently and further meets the requirements that enterprises place on graduates.

2. Main content and key points
1. Analysis of the part: (1) function of the part; (2) process analysis of the part.
2. Process planning: (1) selection of the blank manufacturing method; (2) selection of locating datums; (3) drafting of the process route; (4) selection of machine tools, cutting tools, fixtures and gauges; (5) determination of machining allowances, operation dimensions and blank dimensions; (6) determination of cutting parameters and basic machining times.
3. Special fixture design: (1) overview of machine-tool fixtures; (2) selection of locating datums; (3) calculation of cutting and clamping forces; (4) statement of the problem; (5) fixture design; (6) characteristics of the fixture design; (7) fixture classification; (8) development of fixture design technology; (9) fixture base components; (10) design method and procedure.
4. Explanatory report (design documentation).
3. Expected results
The process designed for the outer cylinder bushing should be reasonable, and the resulting process plan should satisfy the drawing dimensions, surface roughness, machining accuracy and all other requirements. The fixture design should be reasonable and meet the process requirements.

4. Existing problems and planned solutions
Problems:
1. I am not yet proficient in drafting with CAD software.
2. My understanding of the part drawing is not yet sufficient.
3. My understanding of part machining processes is not yet deep enough.
Solutions:
1. Carefully review the use of the CAD software.
2. Ask teachers and classmates about unclear points on the part drawing.
3. Carefully study machining process knowledge to make up for my shortcomings.
5. Schedule
1. Analyze and draw the part drawing: 1 week
2. Draw the blank drawing: 1 week
3. Design the process route and compile the process plan: 4 weeks
4. Design the process tooling: 3 weeks
5. Write the explanatory report (thesis): 2 weeks
6. References and bibliography
[1] Wang Xiankui. 機械制造工藝學(xué) (Machinery Manufacturing Technology) [M]. Beijing: Tsinghua University Press, 1989.
[2] Deng Wenying, Song Lihong. 金屬工藝學(xué) (Metal Technology), 5th ed. Beijing: Higher Education Press, 2008.
[3] Zhao Zhixiu (ed.). 機械制造工藝學(xué) (Machinery Manufacturing Technology) [M]. Beijing: China Machine Press, 1985.
[4] Xiao Jide. 機床夾具設(shè)計 (Machine Tool Fixture Design) [M]. Beijing: China Machine Press, 2005.
[5] Guan Huizhen, Feng Xin'an. 機械制造裝備設(shè)計 (Design of Machinery Manufacturing Equipment), 3rd ed. Beijing: China Machine Press, 2009.
[6] Sun Liyuan. 機械制造工藝及專用夾具設(shè)計指導(dǎo) (Guide to Machining Processes and Special Fixture Design) [M]. Beijing: Metallurgical Industry Press, 2007.
[7] Yang Shuzi. 機械加工工藝師手冊 (Machining Technologist's Handbook) [M]. Beijing: China Machine Press, 2001.
[8] Zhu Yaoxiang, Pu Linxiang. 現(xiàn)代夾具手冊 (Modern Fixture Handbook) [M]. Beijing: China Machine Press, 2010.
[9] Wu Zongze, Gao Zhi. 機械設(shè)計 (Machine Design), 2nd ed. Beijing: Higher Education Press, 2009.
Supervisor's comments:
Supervisor's signature:            Date (year / month / day):

Department's comments:
Department head's signature:       Date (year / month / day):

Note: body text in small-four (小四) SimSun (宋體).
Robot companion localization at home and in the office
Arnoud Visser, Jürgen Sturm, Frans Groen
Intelligent Autonomous Systems, Universiteit van Amsterdam
http://www.science.uva.nl/research/ias/
Abstract
The abilities of mobile robots depend greatly on the performance of basic skills such as vision and localization. Although great progress has been made in exploring and mapping extensive public areas with large holonomic robots on wheels, less attention has been paid to the localization of a small robot companion in a confined environment such as a room at home or in the office. In this article, a localization algorithm for the popular Sony entertainment robot Aibo inside a room is worked out. This algorithm can provide localization information based on the natural appearance of the walls of the room. The algorithm starts by making a scan of the surroundings, turning the head and the body of the robot on a certain spot. The robot learns the appearance of the surroundings at that spot by storing color transitions at different angles in a panoramic index. The stored panoramic appearance is used to determine the orientation (including a confidence value) relative to the learned spot for other points in the room. When multiple spots are learned, an absolute position estimate can be made. The applicability of this kind of localization is demonstrated in two environments: at home and in an office.
1 Introduction
1.1 Context
Humans orientate easily in their natural environments. To be able to interact with humans, mobile robots also need to know where they are. Robot localization is therefore an important basic skill for a mobile robot companion like the Aibo. Yet, the Sony entertainment software contained no localization software until the latest release1. Still, many other applications for a robot companion, like collecting the newspaper from the front door, strongly depend on fast, accurate and robust position estimates. As long as the localization of a walking robot like the Aibo is based on odometry after sparse observations, no robust and accurate position estimates can be expected.
Most of the localization research with the Aibo has concentrated on the RoboCup. At the RoboCup2, artificial landmarks such as colored flags, goals and field lines can be used to achieve localization accuracies below six centimeters [6, 8].
The price that these RoboCup approaches pay is their total dependency on artificial landmarks of known shape, position and color. Most algorithms even require manual calibration of the actual colors and lighting conditions used on a field, and are still quite susceptible to disturbances around the field, for instance those produced by brightly colored clothes in the audience.
The interest of the RoboCup community in more general solutions has been (and still is) growing
over the past few years. The almost-SLAM challenge3 of the 4-Legged league is a good example of
the state-of-the-art in this community. For this challenge additional landmarks with bright colors
are placed around the borders of a RoboCup field. The robots get one minute to walk around and explore the field. Then the normal beacons and goals are covered up or removed, and the robot must move to a series of five points on the field, using the information learnt during the first minute. The winner of this challenge [6] reached the five points by using mainly the information of the field lines. The additional landmarks were only used to break the symmetry on the soccer field.

1 Aibo Mind 3 remembers the direction of its station and toys relative to its current orientation.
2 RoboCup Four Legged League homepage, last accessed in May 2006, http://www.tzi.de/4legged
3 Details about the Simultaneous Localization and Mapping challenge can be found at http://www.tzi.de/4legged/pub/Website/Downloads/Challenges2005.pdf
A more ambitious challenge is formulated in the newly founded RoboCup @ Home league4. In
this challenge the robot has to safely navigate toward objects in the living room environment. The
robot gets 5 minutes to learn the environment. After the learning phase, the robot has to visit 4
distinct places/objects in the scenario, at least 4 meters away from each other, within 5 minutes.
1.2 Related Work
Many researchers have worked on the SLAM problem in general, for instance using panoramic images [1, 2, 4, 5]. These approaches are inspiring, but only partially transferable to the 4-Legged league. The Aibo is not equipped with an omni-directional high-quality camera. The camera in the nose has only a horizontal opening angle of 56.9 degrees and a resolution of 416 x 320 pixels. Further, the horizon in the images is not constant, but depends on the movements of the head and legs of the walking robot. So each image is taken from a slightly different perspective, and the path of the camera center is only to a first approximation a circle. Further, the images are taken while the head is moving. When moving at full speed, this can give a difference of 5.4 degrees between the top and the bottom of the image. So the image appears to be tilted as a function of the turning speed of the head. Still, the location of the horizon can be calculated by solving the kinematic equations of the robot. To process the images, a 576 MHz processor is available in the Aibo, which means that only simple image processing algorithms are applicable. In practice, the image is analyzed by following scan-lines with a direction relative to the calculated horizon. In our approach, multiple sectors above the horizon are analyzed, with multiple vertical scan-lines in each sector. One of the general approaches [3] divides the image into multiple sectors, but that image is omni-directional and each sector is analyzed on its average color. Our method analyzes each sector on a different characteristic feature: the frequency of color transitions.
2 Approach
The main idea is quite intuitive: we would like the robot to generate and store a 360° circular panorama image of its environment while it is in the learning phase. After that, it should align each new image with the stored panorama, and from that the robot should be able to derive its relative orientation (in the localization phase). This alignment is not trivial because the new image can be translated, rotated, stretched and perspectively distorted when the robot does not stand at the point where the panorama was originally learned [11].
Of course, the Aibo is not able (at least not in real time) to compute this alignment on full-resolution images. Therefore a reduced feature space is designed so that the computations become tractable5 on an Aibo. So, a reduced circular 360° panorama model of the environment is learned. Figure 1 gives a quick overview of the algorithm's main components.
The Aibo performs a calibration phase before the actual learning can start. In this phase the Aibo first decides on a suitable camera setting (i.e. camera gain and shutter setting) based on the dynamic range of brightness in the autoshutter step. Then it collects color pixels by turning its head for a while and finally clusters these into the 10 most important color classes in the color clustering step, using a standard implementation of the Expectation-Maximization algorithm assuming a Gaussian mixture model [9]. The result of the calibration phase is an automatically generated lookup table that maps every YCbCr color onto one of the 10 color classes and can therefore be used to segment incoming images into their characteristic color patches (see figure 2(a)). These initialization steps are worked out in more detail in [10].
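As an illustration of this calibration step, here is a minimal Python sketch of the color clustering and lookup-table generation. The paper gives no code; the use of scikit-learn's GaussianMixture, the channel quantization and the constants are assumptions of this sketch, not details of the authors' on-board implementation.

```python
# Sketch of the calibration step: cluster sampled YCbCr pixels into 10 color
# classes with an EM-fitted Gaussian mixture and bake the result into a lookup
# table. Quantizing each channel to 32 levels keeps the table small
# (32^3 entries instead of 256^3).
import numpy as np
from sklearn.mixture import GaussianMixture

N_CLASSES = 10          # number of color classes used in the paper
QUANT = 8               # step of 8 -> 32 levels per channel (assumed value)

def learn_color_lookup_table(pixels_ycbcr: np.ndarray) -> np.ndarray:
    """pixels_ycbcr: (N, 3) array of YCbCr pixels sampled while the robot
    turns its head. Returns a (32, 32, 32) table of color-class indices."""
    gmm = GaussianMixture(n_components=N_CLASSES, covariance_type="full",
                          random_state=0)
    gmm.fit(pixels_ycbcr.astype(np.float64))

    # Enumerate all quantized YCbCr cells and classify their center colors.
    levels = np.arange(0, 256, QUANT) + QUANT / 2.0
    y, cb, cr = np.meshgrid(levels, levels, levels, indexing="ij")
    centers = np.stack([y.ravel(), cb.ravel(), cr.ravel()], axis=1)
    classes = gmm.predict(centers)
    return classes.reshape(len(levels), len(levels), len(levels)).astype(np.uint8)

def segment_image(image_ycbcr: np.ndarray, lut: np.ndarray) -> np.ndarray:
    """Map every pixel of an (H, W, 3) integer YCbCr image onto its color class."""
    q = (image_ycbcr // QUANT).astype(np.intp)
    return lut[q[..., 0], q[..., 1], q[..., 2]]
```

Precomputing the table once during calibration keeps the per-frame cost to a simple indexed lookup, which matches the tight time budget mentioned in footnote 5.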
4 RoboCup @ Home League homepage, last accessed in May 2006, http://www.ai.rug.nl/robocupathome/
5 Our algorithm consumes approximately 16 milliseconds per image frame; therefore we can easily process images at the full Aibo frame rate (30 fps).
Figure 1: Architecture of our algorithm
Figure 2: Image processing, from the raw image to the sector representation: (a) unsupervised learned color segmentation; (b) sectors and frequent color transitions visualized. This conversion consumes approximately 6 milliseconds per frame on a Sony Aibo ERS-7.
2.1 Sector signature correlation
Every incoming image is now divided into its corresponding sectors6. The sectors are located above the calculated horizon, which is generated by solving the kinematics of the robot. Using the lookup table from the unsupervised learned color clustering, we can compute the sector features by counting, per sector, the frequencies of transitions between each pair of color classes in the vertical direction. This yields a histogram of 10x10 transition frequencies per sector, which we subsequently discretize into 5 logarithmically scaled bins. In figure 2(b) we display the most frequent color transitions for each sector. Some sectors have multiple color transitions in the most frequent bin; other sectors have a single or no dominant color transition. This is only a visualization: not only the most frequent color transitions but the frequencies of all 100 color transitions are used as the characteristic feature of the sector.
In the learning phase we estimate all these 80x(10x10) distributions7 by turning the head and
body of the robot. We define a single distribution for a currently perceived sector by
$$P_{\mathrm{current}}(i, j, \mathit{bin}) = \begin{cases} 1 & \text{if } \mathrm{discretize}(\mathrm{freq}(i, j)) = \mathit{bin} \\ 0 & \text{otherwise} \end{cases} \qquad (1)$$
where i, j are indices of the color classes and bin is one of the five frequency bins. Each sector is seen multiple times, and the many frequency-count samples are combined into a distribution learned for that sector by the equation:

$$P_{\mathrm{learned}}(i, j, \mathit{bin}) = \frac{\mathrm{count}_{\mathrm{sector}}(i, j, \mathit{bin})}{\sum_{\mathit{bin} \in \mathrm{frequencyBins}} \mathrm{count}_{\mathrm{sector}}(i, j, \mathit{bin})} \qquad (2)$$

6 80 sectors corresponding to 360°; with an opening angle of the Aibo camera of approx. 50°, this yields between 10 and 12 sectors per image (depending on the head pan/tilt).
7 When we use 16-bit integers, a complete panorama model can be described by (80 sectors) x (10 colors x 10 colors) x (5 bins) x (2 bytes) = 80 KB of memory.
After the learning phase we can simply multiply the current and the learned distribution to get
the correlation between a currently perceived and a learned sector:
$$\mathrm{Corr}(P_{\mathrm{current}}, P_{\mathrm{learned}}) = \prod_{\substack{i, j \in \mathrm{colorClasses} \\ \mathit{bin} \in \mathrm{frequencyBins}}} P_{\mathrm{learned}}(i, j, \mathit{bin}) \cdot P_{\mathrm{current}}(i, j, \mathit{bin}) \qquad (3)$$
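To make equations (1) to (3) concrete, the following Python sketch computes a sector signature and the correlation between a perceived and a learned sector. The paper gives no code; the bin edges, the array layout and the reading of the product in equation (3) as "the learned probability of the observed bin, multiplied over all color pairs" are assumptions of this sketch.

```python
# Sketch of the sector signature and correlation of equations (1)-(3).
# A sector signature is a 10 x 10 x 5 array over (color i, color j, frequency bin).
import numpy as np

N_COLORS = 10
BIN_EDGES = np.array([1, 2, 4, 8, 16])   # logarithmic bin edges (assumed values)

def sector_signature(class_columns: np.ndarray) -> np.ndarray:
    """class_columns: (n_scanlines, n_pixels) color-class indices of the vertical
    scan-lines of one sector. Returns P_current(i, j, bin) of equation (1):
    a one-hot frequency bin per color pair."""
    freq = np.zeros((N_COLORS, N_COLORS), dtype=np.int64)
    for line in class_columns:
        upper, lower = line[:-1], line[1:]
        changed = upper != lower                          # count only class changes
        np.add.at(freq, (upper[changed], lower[changed]), 1)

    # Discretize counts into 5 bins; everything above the last edge is merged
    # into the top bin.
    bins = np.clip(np.searchsorted(BIN_EDGES, freq, side="right"), 0, 4)
    p_current = np.zeros((N_COLORS, N_COLORS, 5))
    i, j = np.indices((N_COLORS, N_COLORS))
    p_current[i, j, bins] = 1.0
    return p_current

def learned_distribution(bin_counts: np.ndarray) -> np.ndarray:
    """Equation (2): normalize the accumulated bin counts per color pair."""
    totals = bin_counts.sum(axis=2, keepdims=True).astype(float)
    return np.divide(bin_counts, totals,
                     out=np.zeros(bin_counts.shape), where=totals > 0)

def correlation(p_current: np.ndarray, p_learned: np.ndarray) -> float:
    """Equation (3): the one-hot P_current selects, for each color pair, the
    learned probability of the observed bin; the correlation is their product."""
    selected = (p_learned * p_current).sum(axis=2)        # (10, 10) per-pair likelihood
    return float(np.prod(selected + 1e-12))               # small floor avoids hard zeros
```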
2.2 Alignment
After all the correlations between the stored panorama and the new image signatures have been evaluated, we would like to find an alignment between the stored and the seen sectors such that the overall likelihood of the alignment becomes maximal. In other words, we want to find a diagonal path with minimal cost through the correlation matrix. This minimal path is indicated by green dots in figure 3. The path is extended to a green line for the sectors that are not visible in the latest perceived image.
We consider the fitted path to be the true alignment and extract the rotational estimate $\Delta\varphi_{\mathrm{robot}}$ from the offset of its center pixel from the diagonal ($\Delta_{\mathrm{sectors}}$):

$$\Delta\varphi_{\mathrm{robot}} = \frac{360^{\circ}}{80}\,\Delta_{\mathrm{sectors}} \qquad (4)$$
This rotational estimate is the difference between the solid green line and the dashed white line in figure 3, indicated by the orange halter. Further, we try to estimate the noise by fitting a second path through the correlation matrix, far away from the best-fitted path.
$$\mathrm{SNR} = \frac{\sum_{(x, y) \in \mathrm{minimumPath}} \mathrm{Corr}(x, y)}{\sum_{(x, y) \in \mathrm{noisePath}} \mathrm{Corr}(x, y)} \qquad (5)$$
The noise path is indicated in figure 3 with red dots.
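As an illustration, the sketch below implements a simplified version of this alignment in Python: instead of fitting a general minimal-cost path, it scores every circular shift of the 80 learned sectors against the currently visible ones, derives the rotation estimate of equation (4) from the best shift, and approximates the noise path of equation (5) by the best score found well away from the winner. The exclusion window of five sectors is an assumption, not a value from the paper.

```python
# Simplified alignment sketch: score every circular offset of the learned
# panorama against the visible sectors, convert the best offset to a rotation
# estimate (eq. 4) and compare it with the best "far away" offset for the SNR
# of eq. 5.
import numpy as np

N_SECTORS = 80

def align(corr: np.ndarray) -> tuple[float, float]:
    """corr[s, v]: correlation (eq. 3) between learned sector s and the v-th
    currently visible sector. Assumes visible sector v maps to learned sector
    (offset + v) mod 80 for some unknown integer offset."""
    n_visible = corr.shape[1]
    cols = np.arange(n_visible)
    scores = np.array([corr[(offset + cols) % N_SECTORS, cols].sum()
                       for offset in range(N_SECTORS)])

    best = int(np.argmax(scores))
    # Signed offset in [-40, 40) sectors, converted to degrees with equation (4).
    delta_sectors = ((best + N_SECTORS // 2) % N_SECTORS) - N_SECTORS // 2
    rotation_deg = 360.0 / N_SECTORS * delta_sectors

    # Peak noise (eq. 5): best score at least 5 sectors away from the winner.
    dist = np.minimum((np.arange(N_SECTORS) - best) % N_SECTORS,
                      (best - np.arange(N_SECTORS)) % N_SECTORS)
    noise = scores[dist > 5].max()
    snr = scores[best] / noise if noise > 0 else float("inf")
    return float(rotation_deg), float(snr)
```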
(a) Robot standing on the trained spot (matching line is just the diagonal). (b) Robot turned right by 45 degrees (matching line displaced to the left).
Figure 3: Visualization of the alignment step while the robot is scanning with its head. The green solid line marks the minimum path (assumed true alignment), while the red line marks the second-minimal path (assumed peak noise). The white dashed line represents the diagonal, while the orange halter illustrates the distance between the found alignment and the center diagonal ($\Delta_{\mathrm{sectors}}$).
2.3 Position Estimation with Panoramic Localization
The algorithm described in the previous section can be used to get a robust bearing estimate together with a confidence value for a single trained spot. As we ultimately want to use this algorithm to obtain full localization, we extended the approach to support multiple training spots. The main idea is that the robot determines to what extent its current position resembles the previously learned spots and then uses interpolation to estimate its exact position. As we think that this approach could also be useful for the RoboCup @ Home league (where robot localization in complex environments like kitchens and living rooms is required), we may eventually want to store a comprehensive panorama model library containing dozens of previously trained spots (for an overview see [1]).
However, due to the computation time of the feature space conversion and panorama matching, only a single training spot and its corresponding panorama model can be selected per frame. Therefore, the robot cycles through the learned training spots one by one. Every panorama model is associated with a gradually changing confidence value representing a sliding average over the confidence values we get from the per-image matching.
After training, the robot memorizes a given target spot by storing the confidence values received from the training spots. By comparing a new confidence value with its stored reference, it is easy to deduce whether the robot stands closer to or farther from the imprinted target spot.
We assume that the imprinted target spot is located somewhere between the training spots. Then, to compute the final position estimate, we simply weight each training spot with its normalized corresponding confidence value:
$$\mathrm{position}_{\mathrm{robot}} = \sum_{i} \mathrm{position}_{i} \cdot \frac{\mathrm{confidence}_{i}}{\sum_{j} \mathrm{confidence}_{j}} \qquad (6)$$
This should yield zero when the robot stands at the target spot, or a translation estimate towards the robot's position when the confidence values are no longer in balance.
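A minimal sketch of this multi-spot interpolation (equation 6) is given below. The exponential-smoothing constant used for the sliding average is an assumed value, since the paper does not specify how the confidence average is maintained.

```python
# Sketch of multi-spot position estimation (eq. 6): each training spot's
# position is weighted by its normalized sliding-average confidence (SNR).
import numpy as np

ALPHA = 0.2   # smoothing factor of the sliding average (assumed value)

class MultiSpotLocalizer:
    def __init__(self, spot_positions: np.ndarray):
        self.positions = np.asarray(spot_positions, dtype=float)   # (n_spots, 2)
        self.confidences = np.zeros(len(self.positions))

    def update(self, spot_index: int, snr: float) -> None:
        """Blend the latest per-image matching confidence for one spot
        (the robot cycles through the spots, one per frame)."""
        c = self.confidences[spot_index]
        self.confidences[spot_index] = (1.0 - ALPHA) * c + ALPHA * snr

    def estimate_position(self) -> np.ndarray:
        """Equation (6): confidence-weighted average of the training spots."""
        total = self.confidences.sum()
        if total <= 0.0:
            return self.positions.mean(axis=0)   # no information yet
        weights = self.confidences / total
        return (weights[:, None] * self.positions).sum(axis=0)
```

With four training spots roughly 1 m from the center, as in the experiment described next, the estimate is pulled toward the spots whose panoramas currently match best.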
To prove the validity of this idea, we trained the robot on four spots on a regular 4-Legged field in our robolab. The spots were located along the axes, approximately 1 m away from the center.
As target spot, we simply chose the center of the field. The training itself was performed fully
autonomously by the Aibo and took less than 10 minutes. After training was complete, the Aibo
walked back to the center of the field. We recorded the found position and kidnapped the robot to
an arbitrary position around the field and let it walk back again.
Please be aware that our approach to multi-spot localization is at this moment rather primitive and should only be understood as a proof of concept. In the end, the panoramic localization data from vision should of course be processed by a more sophisticated localization algorithm, such as a Kalman or particle filter (not least to incorporate movement data from the robot).
3 Results
3.1 Environments
We selected four different environments to test our algorithm under a variety of circumstances. The first two experiments were conducted at home and in an office environment8 to measure performance under real-world circumstances. The experiments were performed on a cloudy morning, on a sunny afternoon and late in the evening. Furthermore, we conducted exhaustive tests in our laboratory. Even more challenging, we took an Aibo outdoors (see [7]).
3.2 Measured results
Figure 4(a) illustrates the results of a rotational test in a normal living room. As the error in the rotation estimates ranges between -4.5 and +4.5 degrees, we may assume an error in alignment of a single sector9; moreover, the size of the confidence interval can be translated into at most two sectors, which corresponds to the maximal angular resolution of our approach.
8 XX office, DECIS lab, Delft
9 A full circle of 360° divided by 80 sectors
(a) Rotational test in a natural environment (living room, sunny afternoon). (b) Translational test in a natural environment (child's room, late in the evening).
Figure 4: Typical orientation estimation results of experiments conducted at home. In the rotational experiment on the left the robot is rotated over 90 degrees on the same spot, and every 5 degrees its orientation is estimated. The robot is able to find its true orientation with an error estimate equal to one sector of 4.5 degrees. The translational test on the right is performed in a child's room. The robot is translated over a straight line of 1.5 meters, which covers the major part of the free space in this room. The robot is able to maintain a good estimate of its orientation, although the error estimate increases away from the location where the appearance of the surroundings was learned.
Figure 4(b) shows the effects of a translational dislocation in a child's room. The robot was moved along a straight line back and forth through the room (via the trained spot somewhere in the middle). The robot is able to estimate its orientation quite well on this line. The discrepancy with the true orientation is between +12.1 and -8.6 degrees, close to the walls. This is also reflected in
the true orientation is between +12.1 and -8.6 degrees, close to the walls. This is also reflected in
the computed confidence interval, which grows steadily when the robot is moved away from the
trained spot. The results are quite impressive for the relatively big movements in a small room and
the resulting significant perspective changes in that room.
Figure 5(a) also stems from a translational test (cloudy morning) which has been conducted in
an office environment. The free space in this office is much larger than at home. The robot was
moved along a 14 m long straight line to the left and right and its orientation was estimated. Note that the error estimate stays low on the right side of this plot. This is an artifact which nicely reflects the repetition of similar-looking working islands in the office.
In both translational tests it can be seen intuitively that the rotation estimates are within an acceptable range. This can also be shown quantitatively (see figure 5(b)): both the orientation
error and the confidence interval increase slowly and in a graceful way when the robot is moved
away from the training spot.
Finally, figure 6 shows the result of the experiment to estimate the absolute position with multiple learned spots. It can be seen that the localization is not as accurate as that of traditional approaches, but it can still be useful for some applications (bearing in mind that no artificial landmarks are required). We repeatedly recorded a deviation to the upper right, which we think can be explained by the fact that different learning spots do not produce equally strong confidence values; we believe we can correct for this by means of confidence value normalization in the near future.
4 Conclusion
Although at first sight the algorithm seems to rely on specific texture features of the surrounding
surfaces, in practice no dependency could be found. This can be explained by two reasons: firstly, as
the (vertical) position of a color transition is not used anyway, the algorithm is quite robust against
(vertical) scaling. Secondly, as the algorithm aligns on many color transitions in the background
(typically more than a hundred in the same sector), the few color transitions produced by objects
in the foreground (like beacons and spectators) have a minor impact on the match (because their
sizes relative to the background are comparatively small).
The lack of accurate absolute position estimates seems to be a clear drawback with respect to other methods, but bearing information alone can already be very useful for certain applications.
(a) Translational test i