<?xml version="1.0" encoding="ISO-8859-1"?>
<rss version='2.0' xmlns:lj='http://www.livejournal.org/rss/lj/1.0/'>
<channel>
 <title>Petter Reinholdtsen - Entries from April 2014</title>
 <description>Entries from April 2014</description>
 <link>http://people.skolelinux.org/pere/blog/</link>


 <item>
  <title>S3QL, a locally mounted cloud file system - nice free software</title>
  <link>http://people.skolelinux.org/pere/blog/S3QL__a_locally_mounted_cloud_file_system___nice_free_software.html</link>
  <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/S3QL__a_locally_mounted_cloud_file_system___nice_free_software.html</guid>
  <pubDate>Wed, 9 Apr 2014 11:30:00 +0200</pubDate>
  <description>&lt;p&gt;For a while now, I have been looking for a sensible offsite backup
solution for use at home. My requirements are simple: it must be
cheap and locally encrypted (in other words, I keep the encryption
keys, and the storage provider does not have access to my private
files). One idea my friends and I had many years ago, before the
cloud storage providers showed up, was to use Google mail as storage:
write a Linux block device storing its blocks as emails in the mail
service provided by Google, and thus get heaps of free space. On top
of this one could add encryption, RAID and volume management to get
lots of (admittedly fairly slow) cheap and encrypted storage. But
I never found the time to implement such a system. The last few weeks
I have looked at a system called
&lt;a href=&quot;https://bitbucket.org/nikratio/s3ql/&quot;&gt;S3QL&lt;/a&gt;, a locally
mounted network-backed file system with the features I need.&lt;/p&gt;

&lt;p&gt;S3QL is a FUSE file system with a local cache and cloud storage,
handling several different storage providers, any with an Amazon S3,
Google Drive or OpenStack API. There are heaps of such storage
providers. S3QL can also use a local directory as storage, which
combined with sshfs allows for file storage on any ssh server. S3QL
includes support for encryption, compression, de-duplication, snapshots
and immutable file systems, allowing me to mount the remote storage as
a local mount point and look at and use the files as if they were
local, while the content is also stored in the cloud. This allows me
to have a backup that should survive a fire. The file system can not
be shared between several machines at the same time, as only one can
mount it at a time, but any machine with the encryption key and
access to the storage service can mount it once it is unmounted.&lt;/p&gt;

&lt;p&gt;It is simple to use. I&#39;m using it on Debian Wheezy, where the
package is already included. So to get started, run &lt;tt&gt;apt-get
install s3ql&lt;/tt&gt;. Next, pick a storage provider. I ended up picking
Greenqloud, after reading their nice recipe on
&lt;a href=&quot;https://greenqloud.zendesk.com/entries/44611757-How-To-Use-S3QL-to-mount-a-StorageQloud-bucket-on-Debian-Wheezy&quot;&gt;how
to use S3QL with their Amazon S3 service&lt;/a&gt;, because I trust the laws
in Iceland more than those in the USA when it comes to keeping my
personal data safe and private, and thus would rather spend money on a
company in Iceland. Another nice recipe is available in the article
&lt;a href=&quot;http://www.admin-magazine.com/HPC/Articles/HPC-Cloud-Storage&quot;&gt;S3QL
Filesystem for HPC Storage&lt;/a&gt; by Jeff Layton in the HPC section of
Admin magazine. When the provider is picked, figure out how to get
the API key needed to connect to the storage API. With Greenqloud,
the key did not show up until I had added payment details to my
account.&lt;/p&gt;

&lt;p&gt;Armed with the API access details, it is time to create the file
system. First, create a new bucket in the cloud. This bucket is the
file system storage area. I picked a bucket name reflecting the
machine that was going to store data there, but any name will do.
I&#39;ll refer to it as &lt;tt&gt;bucket-name&lt;/tt&gt; below. In addition, one
needs the API login and password, and a locally created password.
Store it all in ~root/.s3ql/authinfo2 like this:&lt;/p&gt;

&lt;p&gt;&lt;blockquote&gt;&lt;pre&gt;
[s3c]
storage-url: s3c://s.greenqloud.com:443/bucket-name
backend-login: API-login
backend-password: API-password
fs-passphrase: local-password
&lt;/pre&gt;&lt;/blockquote&gt;&lt;/p&gt;
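&lt;p&gt;Since this file holds the API credentials and the file system
passphrase in clear text, it should be readable by root only; if I
remember correctly, S3QL complains about credential files with lax
permissions. A sketch, using a temporary file to stand in for
/root/.s3ql/authinfo2:&lt;/p&gt;

```shell
# A temporary file stands in for /root/.s3ql/authinfo2 so the
# example is self-contained.
AUTH=$(mktemp)
# Restrict the credentials file to its owner.
chmod 600 "$AUTH"
stat -c %a "$AUTH"    # prints 600
```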

&lt;p&gt;I create my local passphrase using &lt;tt&gt;pwget 50&lt;/tt&gt; or similar,
but any sensible way to create a fairly random password should do.
Armed with these details, it is now time to run mkfs, entering the API
details and the password to create the file system:&lt;/p&gt;

&lt;p&gt;&lt;blockquote&gt;&lt;pre&gt;
# mkdir -m 700 /var/lib/s3ql-cache
# mkfs.s3ql --cachedir /var/lib/s3ql-cache --authfile /root/.s3ql/authinfo2 \
  --ssl s3c://s.greenqloud.com:443/bucket-name
Enter backend login:
Enter backend password:
Before using S3QL, make sure to read the user&#39;s guide, especially
the &#39;Important Rules to Avoid Loosing Data&#39; section.
Enter encryption password:
Confirm encryption password:
Generating random encryption key...
Creating metadata tables...
Dumping metadata...
..objects..
..blocks..
..inodes..
..inode_blocks..
..symlink_targets..
..names..
..contents..
..ext_attributes..
Compressing and uploading metadata...
Wrote 0.00 MB of compressed metadata.
# &lt;/pre&gt;&lt;/blockquote&gt;&lt;/p&gt;

&lt;p&gt;The next step is mounting the file system to make the storage
available.&lt;/p&gt;

&lt;p&gt;&lt;blockquote&gt;&lt;pre&gt;
# mount.s3ql --cachedir /var/lib/s3ql-cache --authfile /root/.s3ql/authinfo2 \
  --ssl --allow-root s3c://s.greenqloud.com:443/bucket-name /s3ql
Using 4 upload threads.
Downloading and decompressing metadata...
Reading metadata...
..objects..
..blocks..
..inodes..
..inode_blocks..
..symlink_targets..
..names..
..contents..
..ext_attributes..
Mounting filesystem...
# df -h /s3ql
Filesystem                              Size  Used Avail Use% Mounted on
s3c://s.greenqloud.com:443/bucket-name  1.0T     0  1.0T   0% /s3ql
#
&lt;/pre&gt;&lt;/blockquote&gt;&lt;/p&gt;

&lt;p&gt;The file system is now ready for use. I use rsync to store my
backups in it, and as the metadata used by rsync is downloaded at
mount time, no network traffic (and storage cost) is triggered by
running rsync. To unmount, one should not use the normal umount
command, as this will not flush the cache to the cloud storage;
instead, run the umount.s3ql command like this:&lt;/p&gt;

&lt;p&gt;&lt;blockquote&gt;&lt;pre&gt;
# umount.s3ql /s3ql
#
&lt;/pre&gt;&lt;/blockquote&gt;&lt;/p&gt;

&lt;p&gt;There is a fsck command available to check the file system and
correct any problems detected. This can be used if the local server
crashes while the file system is mounted, to reset the &quot;already
mounted&quot; flag. This is what it looks like when processing a working
file system:&lt;/p&gt;

&lt;p&gt;&lt;blockquote&gt;&lt;pre&gt;
# fsck.s3ql --force --ssl s3c://s.greenqloud.com:443/bucket-name
Using cached metadata.
File system seems clean, checking anyway.
Checking DB integrity...
Creating temporary extra indices...
Checking lost+found...
Checking cached objects...
Checking names (refcounts)...
Checking contents (names)...
Checking contents (inodes)...
Checking contents (parent inodes)...
Checking objects (reference counts)...
Checking objects (backend)...
..processed 5000 objects so far..
..processed 10000 objects so far..
..processed 15000 objects so far..
Checking objects (sizes)...
Checking blocks (referenced objects)...
Checking blocks (refcounts)...
Checking inode-block mapping (blocks)...
Checking inode-block mapping (inodes)...
Checking inodes (refcounts)...
Checking inodes (sizes)...
Checking extended attributes (names)...
Checking extended attributes (inodes)...
Checking symlinks (inodes)...
Checking directory reachability...
Checking unix conventions...
Checking referential integrity...
Dropping temporary indices...
Backing up old metadata...
Dumping metadata...
..objects..
..blocks..
..inodes..
..inode_blocks..
..symlink_targets..
..names..
..contents..
..ext_attributes..
Compressing and uploading metadata...
Wrote 0.89 MB of compressed metadata.
#
&lt;/pre&gt;&lt;/blockquote&gt;&lt;/p&gt;

&lt;p&gt;Thanks to the cache, working on files that fit in the cache is very
quick, about the same speed as local file access. Uploading large
amounts of data is, for me, limited by the bandwidth out of and into
my house. Uploading 685 MiB with a 100 MiB cache gave me 305 kiB/s,
which is very close to my upload speed, and downloading the same
Debian installation ISO gave me 610 kiB/s, close to my download speed.
Both were measured using &lt;tt&gt;dd&lt;/tt&gt;. So for me, the bottleneck is my
network, not the file system code. I do not know what a good cache
size would be, but suspect that the cache should be larger than your
working set.&lt;/p&gt;
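&lt;p&gt;A sketch of the kind of dd measurement used for the numbers above;
a temporary file stands in for a file on the /s3ql mount, and the
sizes are example values:&lt;/p&gt;

```shell
# Write a fixed amount of data and let dd report the throughput
# on stderr; on the real mount the target would be under /s3ql.
TARGET=$(mktemp)
dd if=/dev/zero of="$TARGET" bs=1M count=10
rm "$TARGET"
```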

&lt;p&gt;I mentioned that only one machine can mount the file system at a
time. If another machine tries, it is told that the file system is
busy:&lt;/p&gt;

&lt;p&gt;&lt;blockquote&gt;&lt;pre&gt;
# mount.s3ql --cachedir /var/lib/s3ql-cache --authfile /root/.s3ql/authinfo2 \
  --ssl --allow-root s3c://s.greenqloud.com:443/bucket-name /s3ql
Using 8 upload threads.
Backend reports that fs is still mounted elsewhere, aborting.
#
&lt;/pre&gt;&lt;/blockquote&gt;&lt;/p&gt;

&lt;p&gt;The file content is uploaded when the cache is full, while the
metadata is uploaded once every 24 hours by default. To ensure the
file system content is flushed to the cloud, one can either unmount
the file system, or ask S3QL to flush the cache and metadata using
s3qlctrl:&lt;/p&gt;

&lt;p&gt;&lt;blockquote&gt;&lt;pre&gt;
# s3qlctrl upload-meta /s3ql
# s3qlctrl flushcache /s3ql
#
&lt;/pre&gt;&lt;/blockquote&gt;&lt;/p&gt;

&lt;p&gt;If you are curious about how much space your data uses in the
cloud, and how much compression and deduplication cut down on the
storage usage, you can use s3qlstat on the mounted file system to get
a report:&lt;/p&gt;

&lt;p&gt;&lt;blockquote&gt;&lt;pre&gt;
# s3qlstat /s3ql
Directory entries:    9141
Inodes:               9143
Data blocks:          8851
Total data size:      22049.38 MB
After de-duplication: 21955.46 MB (99.57% of total)
After compression:    21877.28 MB (99.22% of total, 99.64% of de-duplicated)
Database size:        2.39 MB (uncompressed)
(some values do not take into account not-yet-uploaded dirty blocks in cache)
#
&lt;/pre&gt;&lt;/blockquote&gt;&lt;/p&gt;

&lt;p&gt;I mentioned earlier that there are several possible suppliers of
storage. I did not try to locate them all, but am aware of at least
&lt;a href=&quot;https://www.greenqloud.com/&quot;&gt;Greenqloud&lt;/a&gt;,
&lt;a href=&quot;http://drive.google.com/&quot;&gt;Google Drive&lt;/a&gt;,
&lt;a href=&quot;http://aws.amazon.com/s3/&quot;&gt;Amazon S3 web services&lt;/a&gt;,
&lt;a href=&quot;http://www.rackspace.com/&quot;&gt;Rackspace&lt;/a&gt; and
&lt;a href=&quot;http://crowncloud.net/&quot;&gt;Crowncloud&lt;/a&gt;. The latter even
accepts payment in Bitcoin. Pick one that suits your needs. Some of
them provide several GiB of free storage, but the pricing models are
quite different, and you will have to figure out what suits you
best.&lt;/p&gt;

&lt;p&gt;While researching this blog post, I had a look at research papers
and posters discussing the S3QL file system. There are several, which
told me that the file system is getting critical review from the
scientific community, and this increased my confidence in using it.
One nice poster is titled
&quot;&lt;a href=&quot;http://www.lanl.gov/orgs/adtsc/publications/science_highlights_2013/docs/pg68_69.pdf&quot;&gt;An
Innovative Parallel Cloud Storage System using OpenStack&#39;s SwiftObject
Store and Transformative Parallel I/O Approach&lt;/a&gt;&quot; by Hsing-Bung
Chen, Benjamin McClelland, David Sherrill, Alfred Torrez, Parks Fields
and Pamela Smith. Please have a look.&lt;/p&gt;

&lt;p&gt;Given my problems with different file systems earlier, I decided to
check out the mounted S3QL file system to see if it would be usable as
a home directory (in other words, that it provides POSIX semantics
when it comes to locking, umask handling and so on). Running
&lt;a href=&quot;http://people.skolelinux.org/pere/blog/Testing_if_a_file_system_can_be_used_for_home_directories___.html&quot;&gt;my
test code to check file system semantics&lt;/a&gt;, I was happy to discover
that no errors were found. So the file system can be used for home
directories, if one chooses to do so.&lt;/p&gt;

&lt;p&gt;If you do not want a locally mounted file system, and want
something that works without the Linux FUSE file system layer, I would
like to mention the
&lt;a href=&quot;http://www.tarsnap.com/&quot;&gt;Tarsnap service&lt;/a&gt;, which also
provides locally encrypted backup using a command line client. It has
a nicer access control system, where one can split out read and write
access, allowing some systems to write to the backup and others to
only read from it.&lt;/p&gt;

&lt;p&gt;As usual, if you use Bitcoin and want to show your support of my
activities, please send Bitcoin donations to my address
&lt;b&gt;&lt;a href=&quot;bitcoin:15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b&amp;label=PetterReinholdtsenBlog&quot;&gt;15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b&lt;/a&gt;&lt;/b&gt;.&lt;/p&gt;
</description>
 </item>

 <item>
  <title>The EU Court of Justice confirmed today that the Data Retention Directive is illegal</title>
  <link>http://people.skolelinux.org/pere/blog/EU_domstolen_bekreftet_i_dag_at_datalagringsdirektivet_er_ulovlig.html</link>
  <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/EU_domstolen_bekreftet_i_dag_at_datalagringsdirektivet_er_ulovlig.html</guid>
  <pubDate>Tue, 8 Apr 2014 11:30:00 +0200</pubDate>
  <description>&lt;p&gt;Today the ruling from the EU Court of Justice on the Data
Retention Directive finally arrived, and not surprisingly the
directive was found to be illegal and in violation of citizens&#39;
fundamental rights. If you wonder what the Data Retention Directive
is, there is
&lt;a href=&quot;http://tv.nrk.no/program/koid75005313/tema-dine-digitale-spor-datalagringsdirektivet&quot;&gt;a
great documentary available from NRK&lt;/a&gt; that I have previously
&lt;a href=&quot;http://people.skolelinux.org/pere/blog/Dokumentaren_om_Datalagringsdirektivet_sendes_endelig_p__NRK.html&quot;&gt;recommended&lt;/a&gt;
everyone to watch.&lt;/p&gt;

&lt;p&gt;Here is a small collection of news reports on the case, and I
expect more will appear during the day. More can be found
&lt;a href=&quot;http://www.mylder.no/?drill=datalagringsdirektivet&amp;intern=1&quot;&gt;via
mylder&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;ul&gt;

&lt;li&gt;&lt;a href=&quot;http://e24.no/digital/eu-domstolen-datalagringsdirektivet-er-ugyldig/22879592&quot;&gt;The
EU Court of Justice: The Data Retention Directive is invalid&lt;/a&gt; -
e24.no 2014-04-08&lt;/li&gt;

&lt;li&gt;&lt;a href=&quot;http://www.aftenposten.no/nyheter/iriks/EU-domstolen-Datalagringsdirektivet-er-ulovlig-7529032.html&quot;&gt;The
EU Court of Justice: The Data Retention Directive is illegal&lt;/a&gt; -
aftenposten.no 2014-04-08&lt;/li&gt;

&lt;li&gt;&lt;a href=&quot;http://www.aftenposten.no/nyheter/iriks/politikk/Krever-DLD-stopp-i-Norge-7530086.html&quot;&gt;Demanding
a DLD halt in Norway&lt;/a&gt; - aftenposten.no 2014-04-08&lt;/li&gt;

&lt;li&gt;&lt;a href=&quot;http://www.p4.no/story.aspx?id=566431&quot;&gt;Apenes: - A day
of joy&lt;/a&gt; - p4.no 2014-04-08&lt;/li&gt;

&lt;li&gt;&lt;a href=&quot;http://www.nrk.no/norge/_-datalagringsdirektivet-er-ugyldig-1.11655929&quot;&gt;The
EU Court of Justice: - The Data Retention Directive is invalid&lt;/a&gt; -
nrk.no 2014-04-08&lt;/li&gt;

&lt;li&gt;&lt;a href=&quot;http://www.vg.no/nyheter/utenriks/data-og-nett/eu-domstolen-datalagringsdirektivet-er-ugyldig/a/10130280/&quot;&gt;The
EU Court of Justice: The Data Retention Directive is invalid&lt;/a&gt; -
vg.no 2014-04-08&lt;/li&gt;

&lt;li&gt;&lt;a href=&quot;http://www.dagbladet.no/2014/04/08/nyheter/innenriks/datalagringsdirektivet/personvern/32711646/&quot;&gt;-
We should scrap the entire Data Retention Directive&lt;/a&gt; -
dagbladet.no 2014-04-08&lt;/li&gt;

&lt;li&gt;&lt;a href=&quot;http://www.digi.no/928137/eu-domstolen-dld-er-ugyldig&quot;&gt;The
EU Court of Justice: DLD is invalid&lt;/a&gt; - digi.no 2014-04-08&lt;/li&gt;

&lt;li&gt;&lt;a href=&quot;http://www.irishtimes.com/business/sectors/technology/european-court-declares-data-retention-directive-invalid-1.1754150&quot;&gt;European
court declares data retention directive invalid&lt;/a&gt; - irishtimes.com
2014-04-08&lt;/li&gt;

&lt;li&gt;&lt;a href=&quot;http://www.reuters.com/article/2014/04/08/us-eu-data-ruling-idUSBREA370F020140408?feedType=RSS&quot;&gt;EU
court rules against requirement to keep data of telecom users&lt;/a&gt; -
reuters.com 2014-04-08&lt;/li&gt;

&lt;/ul&gt;
&lt;/p&gt;

&lt;p&gt;I think it is very good that yet another voice declares that
totalitarian surveillance of the population is unacceptable, but it is
still just as important as before to protect the private sphere, as
the technological capabilities still exist and are being exploited,
and I believe efforts in projects like
&lt;a href=&quot;https://wiki.debian.org/FreedomBox&quot;&gt;Freedombox&lt;/a&gt; and
&lt;a href=&quot;http://www.dugnadsnett.no/&quot;&gt;Dugnadsnett&lt;/a&gt; are more
important than ever.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Update 2014-04-08 12:10&lt;/strong&gt;: The fundraising campaign
to stop the Data Retention Directive in Norway is run by the
association
&lt;a href=&quot;http://www.digitaltpersonvern.no/&quot;&gt;Digitalt Personvern&lt;/a&gt;,
which has collected NOK 843,215 so far but will need much more if
Høyre and Arbeiderpartiet do not change their position on the issue.
It was &lt;a href=&quot;http://www.holderdeord.no/parliament-issues/48650&quot;&gt;only
the parties Høyre and Arbeiderpartiet&lt;/a&gt; that voted for the Data
Retention Directive, and one of them must change its mind for there to
be a majority against it in the Storting. See more about the case at
&lt;a href=&quot;http://www.holderdeord.no/issues/69-innfore-datalagringsdirektivet&quot;&gt;Holder
de ord&lt;/a&gt;.&lt;/p&gt;
</description>
 </item>

 <item>
  <title>ReactOS Windows clone - nice free software</title>
  <link>http://people.skolelinux.org/pere/blog/ReactOS_Windows_clone___nice_free_software.html</link>
  <guid isPermaLink="true">http://people.skolelinux.org/pere/blog/ReactOS_Windows_clone___nice_free_software.html</guid>
  <pubDate>Tue, 1 Apr 2014 12:10:00 +0200</pubDate>
  <description>&lt;p&gt;Microsoft has announced that Windows XP reaches its end of life
on 2014-04-08, in 7 days. But there are heaps of machines still
running Windows XP and depending on it to run their applications, and
upgrading will be expensive, both in money and in the amount of effort
needed to migrate from Windows XP to a new operating system. Some
obvious options (buy a new Windows machine, buy a Mac OS X machine,
install Linux on the existing machine) are already well known and
covered elsewhere. Most of them involve leaving the user applications
installed on Windows XP behind and trying out replacements or updated
versions. In this blog post I want to mention one strange bird that
allows people to keep the hardware and the existing Windows XP
applications, and run them on a free software operating system that is
Windows XP compatible.&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;http://www.reactos.org/&quot;&gt;ReactOS&lt;/a&gt; is a free software
operating system (GNU GPL licensed) working on providing an operating
system that is binary compatible with Windows, able to run Windows
programs directly and to use Windows drivers for hardware directly.
The project goal is for Windows users to keep their existing machines,
drivers and software, and gain the advantages of using an operating
system without usage limitations caused by non-free licensing. It is
a Windows clone running directly on the hardware, so it is quite
different from the approach taken by
&lt;a href=&quot;http://www.winehq.org/&quot;&gt;the Wine project&lt;/a&gt;, which makes it
possible to run Windows binaries on Linux.&lt;/p&gt;

&lt;p&gt;The ReactOS project shares code with the Wine project, so most
shared libraries available on Windows are already implemented.
There is also a software manager like the one we are used to on Linux,
allowing the user to install free software applications with a simple
click directly from the Internet. Check out the
&lt;a href=&quot;http://www.reactos.org/screenshots&quot;&gt;screen shots on the
project web site&lt;/a&gt; for an idea of what it looks like (it looks just
like Windows before Metro).&lt;/p&gt;

&lt;p&gt;I do not use ReactOS myself, preferring Linux and Unix-like
operating systems. I&#39;ve tested it, and it works fine in a virt-manager
virtual machine. The browser, minesweeper, notepad etc. are working
fine as far as I can tell. Unfortunately, my main test application
is the software included on a CD with the Lego Mindstorms NXT, which
seems to install just fine from CD but fails to leave any binaries on
the disk after the installation. So no luck with that test software.
No idea why, but I hope someone else will figure out and fix the
problem. I&#39;ve tried the ReactOS Live ISO on a physical machine, and
it seemed to work just fine. If you like Windows and want to keep
running your old Windows binaries, check it out by
&lt;a href=&quot;http://www.reactos.org/download&quot;&gt;downloading&lt;/a&gt; the
installation CD, the live CD or the preinstalled virtual machine
image.&lt;/p&gt;
</description>
 </item>

</channel>
</rss>