EyeHacker is an immersive VR system that spatiotemporally mixes a live scene with recorded/edited scenes based on measurements of the user's gaze (the locus of attention). The system continuously updates a transition risk from the user's gaze and the optical flow of the scenes, and permits a scene transition only while this risk stays below a threshold that is modulated by the user's head movement (the faster the head moves, the higher the threshold). Combined with experience scenarios prepared in advance, this allows visual reality to be manipulated without the user noticing (i.e., eye hacking) and can trigger various kinds of reality confusion: for example, objects around the user can perpetually appear and disappear, leaving the user with a strange feeling that something is wrong, and yet unable to perceive the changes in real time.
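The publications do not disclose the exact risk formula, so the following Python sketch is only one plausible reading of the description above: the Gaussian attention window, the use of optical-flow magnitude as a proxy for visible change, and every parameter name (sigma, base_threshold, gain) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def transition_risk(gaze_xy, flow_mag, sigma=0.15):
    """Estimate how risky a scene transition is right now.

    gaze_xy  -- the user's gaze point, normalized to [0, 1] x [0, 1]
    flow_mag -- per-pixel optical-flow magnitude between scenes (H x W)
    sigma    -- width of the assumed Gaussian attention window (hypothetical)
    """
    h, w = flow_mag.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Weight each pixel by its proximity to the gaze point: the closer a
    # changing region is to the locus of attention, the riskier a swap is.
    attention = np.exp(-(((xs / w - gaze_xy[0]) ** 2)
                         + ((ys / h - gaze_xy[1]) ** 2)) / (2 * sigma ** 2))
    return float((attention * flow_mag).sum() / attention.sum())

def allow_transition(risk, head_speed, base_threshold=0.2, gain=0.5):
    """Gate the scene swap: faster head movement raises the threshold,
    making transitions easier to hide, as described above."""
    return risk < base_threshold + gain * head_speed
```

A controller would evaluate these every frame, swapping the live and recorded feeds only on frames where allow_transition returns True.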
1. Daichi Ito, Sohei Wakisaka, Atsushi Izumihara, Tomoya Yamaguchi, Atsushi Hiyama, and Masahiko Inami. 2019. EyeHacker: Gaze-Based Automatic Reality Manipulation. In SIGGRAPH '19 Emerging Technologies. Retrieved from https://doi.org/10.1145/3305367.3327988
2. 伊藤大智, 高原慧一, 坂本凜, 泉原厚史, 脇坂崇平, 檜山敦, and 稲見昌彦. 2019. Verification of a system that automatically manipulates subjective reality based on gaze information (in Japanese). In The 24th Annual Conference of the Virtual Reality Society of Japan (第24回日本バーチャルリアリティ学会大会). Retrieved from http://conference.vrsj.org/ac2019/program/common/doc/pdf/4B-04.pdf