Yihui Xie http://yihui.name 2013-04-19T12:56:32-07:00 xie@yihui.name Travis CI for R! (not yet) http://yihui.name/en/2013/04/travis-ci-general-purpose/ 2013-04-12T00:00:00-07:00 Yihui Xie http://yihui.name/en/2013/04/travis-ci-general-purpose A few days ago I wrote about Travis CI, and wondered if we could integrate the testing of R packages into this wonderful platform. A reader (Vincent Arel-Bundock) pointed out in the comments that Travis runs Ubuntu, which allows you to install software packages at will.

I took a look at the documentation, and realized they were building and testing packages in virtual machines. No wonder sudo apt-get works. Remember apt-get -h | tail -n1:

This APT has Super Cow Powers.

## R on Travis CI

Now we are essentially system admins, and we can install anything from Ubuntu repositories, so it does not really matter that Travis CI does not support R yet. Below are a few steps to integrate your R package (on Github) into this system:

1. follow the official guide until you see .travis.yml;
2. copy my .travis.yml for the knitr package if you want, or write your own;
• I use a custom library path ~/R to install add-on R packages so that I do not have to type sudo everywhere
• at the moment I use the RDev PPA by Michael Rutter to install R 3.0.0, since his plan is to have R 3.0.0 on CRAN's Ubuntu repositories in May; at that point I'll change this PPA to a CRAN repository
• since R CMD check requires all packages in Suggests as well, I install knitr using install.packages(dep = TRUE) to make sure all relevant packages are installed
• make install and make check are wrappers of R CMD build and R CMD check respectively, defined in the Makefile
3. push this .travis.yml to Github, and Travis CI will start building your package when a worker is available (normally within a few seconds);
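Putting the steps above together, a minimal .travis.yml might look roughly like this (a sketch, not my actual file; the PPA name, the package list and the CRAN mirror are assumptions that will change once R 3.0.0 reaches CRAN's Ubuntu repositories):

```yaml
# Sketch of a .travis.yml for an R package; names here are assumptions.
language: c            # Travis has no R support yet, so any base language works
before_install:
  - sudo add-apt-repository -y ppa:marutter/rdev
  - sudo apt-get update
  - sudo apt-get install -y r-base-dev texlive
install:
  - mkdir -p ~/R
  - echo 'R_LIBS=~/R' > ~/.Renviron
  - Rscript -e 'install.packages("knitr", dependencies = TRUE, repos = "http://cran.r-project.org")'
script:
  - R CMD build .
  - R CMD check *.tar.gz
```

The custom library path in ~/.Renviron is what lets you skip sudo when installing add-on packages.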

By default you will receive email notifications when the build status changes. The documentation also describes how to embed a build status image, e.g.

What I described here actually applies to any software packages (not only R), as long as the dependencies are available under Ubuntu, or you know how to build them.

## But it is still far from CRAN

OK, it works, but we are still a little bit far from what CRAN does, because Travis CI does not have official support for R. Each time we have to install one gigabyte of additional software to create the R testing environment (sigh, if only R did not have to tie itself to LaTeX). If these packages were pre-built in the virtual machines, it would save us a lot of time.

The second problem is, there is no Windows support on Travis CI (one developer told us on Twitter that it was coming). There is a page for OS X, but I did not really figure out how to build software under OS X there.

The third problem is Travis CI only builds and tests packages; it does not provide downloads like CRAN. Perhaps we can upload the packages using encryption keys to our own servers.

## R-Forge, where are you going?

I will shut up here, since I realized I was not being constructive. Let me spend more time thinking about this; I would love to hear suggestions from readers as well.

So, two potential Google Summer of Code projects:

• make R an officially supported language on Travis CI (this really depends on whether the Travis team wants it)
• improve R-Forge (this, of course, depends on whether the R-Forge team thinks they need help)
Travis CI for R? http://yihui.name/en/2013/04/travis-ci-for-r/ 2013-04-07T00:00:00-07:00 Yihui Xie http://yihui.name/en/2013/04/travis-ci-for-r I'm always worried about CRAN: a system maintained by FTP and emails from real humans (basically one of Uwe, Kurt or Prof Ripley). I'm worried for two reasons:

1. the number of R packages is growing exponentially;
2. time and time again I see frustrations from both parties (CRAN maintainers and package authors);

I have a good solution for 2: keep silent when your submission passes the check system, and say "Sorry!" when it does not, whether or not you agree with the reason (which once made a maintainer unhappy). Do not argue -- just go back and fix the problem if you know what the problem is, or use dark voodoo to hide (yes, hide, not solve) the problem if you are sure you are right. If you read the mailing list frequently, you probably remember that if (CRAN) discussion. The solution in my mind was if (Sys.getenv('USER') == 'ripley').

The key is, do not argue. Silence is gold.

The CRAN maintainers have been volunteering their time, and we should respect them. The question is, will this approach scale well with the growth of packages? Or who should be in charge of R CMD check?

We, the poor authors, cannot guarantee that our packages will pass on CRAN's machines every time, for all kinds of reasons. Some problems are actually easy to fix without a real human yelling at us. On the other hand, if the package fortunately passes R CMD check, we do not really need an email from a real human acknowledging "thanks, on CRAN now".

Travis CI is an excellent platform for continuous integration of software packages. You do not need to interact with a real person by email -- each time you push to Github, your package will be automatically built and checked. If there are problems, you will be notified automatically.

A similar platform in the R world is Bioconductor. It has the two best components in software development: version control (although sadly SVN) and continuous checking. I do not know if CRAN will catch up one day. I'm not very optimistic about it; perhaps a more realistic approach is to start a Google Summer of Code project on introducing R into Travis CI. I have no idea how difficult that will be, but I will definitely be thrilled if it comes true this year.

Anyone?

Update on 04/16/2013: just to clarify, what Bioconductor does is not strictly continuous integration (yet) in the sense that it builds packages daily instead of immediately on changes.

On ENAR, or Statistical Meetings in General http://yihui.name/en/2013/03/on-enar-or-statistical-meetings-in-general/ 2013-03-14T00:00:00-07:00 Yihui Xie http://yihui.name/en/2013/03/on-enar-or-statistical-meetings-in-general Last year I accepted an invitation from Ben to go to ENAR 2013 -- my first ENAR. I used to go to JSM and useR!, and apparently I enjoy useR! most. The reason is not, or not only, because I'm more of a technical person. It is just hard to concentrate at large statistical conferences. I want to make a few suggestions from the perspective of a student, although it is unlikely that any future conference chairs will come here and listen to me:

1. Go green and get rid of printed programs. A program book is thick and clunky, and nobody will take it with them when they leave. The hundreds of pages of paper will only end up in the garbage. If you have to print them, print N/5 copies instead (N is the number of participants) and let the participants share with each other.
2. Improve the websites and add social network features. For example, we could "reserve" the talks we are interested in, so all participants immediately know which talks are popular and highly anticipated, and organizers can schedule appropriately sized rooms. The discussion session by John Chambers, Duncan Temple Lang, Thomas Lumley and Michael Lawrence at the last JSM in San Diego was a failure in the sense that many people were standing outside the room (presumably fans of John Chambers). By comparison, my session at ENAR was assigned to a room of (more than?) 400 seats but only 20 people showed up.
3. If we tell the organizers which sessions we plan to attend, the program book can be a lot thinner! If there are changes in these sessions, only a small group of people need to be notified. Notifying everybody with a few inserted announcements wastes most people's time.
4. One thing I really wish conferences offered: I want to know which people in my "circle" are also going. It is hard to go through 1000 names of participants and spot the familiar ones. I met a friend at ENAR who had collaborated with me (translating an English R tutorial into Chinese) almost 10 years ago, but we had never really met in person, so we did not recognize each other. I did not know he was coming either. I was just sitting on a sofa in a corner and he randomly saw my badge. We were so excited that we finally met in such an unexpected place.
5. If participants can make connections with each other beforehand, it is likely to save us money as well -- we can share the costs when we rent cars, take cabs, book hotels and so on.
6. So please do not charge us upon registration -- give us a deadline and charge us later. Perhaps I will change my mind later if I cannot find enough interesting people to meet, or the popular talks seem irrelevant to me.
7. I heard from Hadley that some IEEE conferences require the speakers to give a 30-second talk before the conference, and I think that would be cool and useful for statistical conferences as well. Nowadays I still hear certain speakers read their slides word for word. Some speakers may be shy or not confident in their oral English, but I do not think the language problem is a really big one. My suggestion to these speakers is to spend more time preparing jokes instead of slides: jokes make the audience concentrate and the speaker relax. I have told a lot of stupid jokes that I regretted afterwards, but I think the net effect is still positive.
8. If it is not possible to arrange 30-second talks, the conference website should allow speakers to upload mini versions of their talks to attract a larger audience.
9. Some people go to conferences for both the presentations and sightseeing. Personally I do not care about the latter at all, but unfortunately all big conferences take place in famous big cities. This ENAR was held in the largest Marriott in the world. Am I proud of that? No, not at all, because I had to stay in a much cheaper hotel three miles away. One evening I tried to walk back, and it took me one hour and twenty minutes. What is more, the place is not really walkable -- I had to walk on the grass on the roadside for half an hour because there was no sidewalk! I do not really mind walking (even for three miles), but it is not a happy memory walking on the grass. The Marriott was such a closed universe that it was hard to walk out, as Karl's picture shows below:

By comparison, useR! conferences often take place on a university campus. Last year it was at Vanderbilt, and they provided dorms for students. I lived happily in the dorm, because all I wanted was a place to sleep (there was free wireless too); nothing luxurious. Usually there are also inexpensive restaurants on campus. I met helpful local students/researchers there who gave me a free ride to some scenic spots, and I had to fight to pay for my own tickets (but was still treated). It was a touching trip, and I managed to make the acquaintance of quite a few interesting people (Frank Harrell, Bill Venables and Kevin Coombes, etc).

ENAR "kindly" included a ticket to the Epcot theme park in the registration fee. What did I do in the theme park? I had dinner in an expensive restaurant (Karl felt guilty for taking me there and generously treated me), and watched a 10-minute fireworks show. Yes, we were so nerdy that we kept on discussing the role of measure theory and Github in a place where we were supposed to say hello to Mickey Mouse.

So my major suggestion to the big statistical meetings is: create an environment that emphasizes communication among people, and do not include distracting activities in the registration package by default. There are a couple of small things you can do, for example:

1. More seats in open areas so we can sit and chat.
2. Free beer. I'm sure it is more doable than an Epcot ticket.
3. Always print the participant's name on both sides of the badge. You know how stupid it is to show the blank side of your badge to other people (and the badge always flips to the damn wrong side, always, flips!!), especially given that statisticians are socially awkward and feel embarrassed to ask other people for their names.
4. Choose a university campus instead of a Marriott. If you do not know how to choose one, choose Iowa State University. I'm sure all participants will be able to concentrate, unless they are interested in seeing corn and pigs on the farms.

Okay, rants ended. Positive energy coming.

As I said on Twitter, I was happy to meet Karl Broman (who later introduced me to Matthew Stephens, the smartest person in statistics and human genetics according to Karl) and John Muschelli there. I had noticed Karl a long time ago, mainly due to Top ten worst graphs, but had never met him.

I did not know much about the Johns Hopkins biostat department before I visited them last year, and it has become a place that surprises me more and more. It is a weird and crazy department. I like people for bizarre reasons. For instance, I like Jeff Leek because he prefers steak to be well done (me too). Rafa hides jokes under his very serious-looking face. "Behind the Tan Door" is the best video in the history of statistics. Karl has a series of hilarious stories about the JHSPH logo. There are people in the world that you know for sure you will be excited to meet. Currently I have yet another person on my list: Tyler Rinker.

In case you have not seen it, I strongly recommend you read (and apply for) the postdoctoral fellow position in reproducible research at Hopkins. Note in particular the phrase "serious moxie"!!

So do I regret ENAR? Certainly not. If I had organized my time more efficiently and met more people like those above, it would have been even better. BTW, if you are not one of the 20 people who were in my session, feel free to check out my slides on knitr (Brian Bot simply called them "animated gifs" instead of "slides").

Contribute to The R Journal with LyX/knitr http://yihui.name/en/2013/02/contribute-to-the-r-journal-with-lyx-knitr/ 2013-02-17T00:00:00-08:00 Yihui Xie http://yihui.name/en/2013/02/contribute-to-the-r-journal-with-lyx-knitr (This paragraph is pure rant; feel free to skip it) I have been looking forward to the one-column LaTeX style of The R Journal, and it has finally arrived. Last time I mentioned "it does not make sense to sell the cooked shrimps"; actually there is another thing that does not make sense in my eyes, which is the two-column LaTeX style. I just hate it. Two-column may save a little bit of space in typesetting compared to one-column, but it brings huge inconvenience to readers who do not have a big enough screen. On every single page, you have to scroll down to read the left column, scroll back up to read the right column, then scroll down... So you just scroll up and down, up and down, ... until you are bored by this PITA.

I have ported the new RJournal.sty to LyX, and you can find the relevant files in my lyx repository. To write articles in LyX with knitr, check out or download the repository and follow these steps:

1. Find out what your user directory is from the LyX menu: Help --> About;
2. From my repository, copy the layouts folder to your user directory;
3. Download RJournal.sty from the R Journal website and put it in your texmf tree so that LaTeX can find it (this might be the most challenging step if you do not know enough about LaTeX, and I do not want to explain this painful topic);
4. (For Windows users only) make sure R is in your PATH (again this is a painful topic that I hate to explain) and install.packages('knitr') in R;
5. From LyX, click Tools --> Reconfigure and restart LyX;
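For step 3, one common approach on Linux or OS X with TeX Live (an assumption; paths differ across TeX distributions, and MiKTeX users should use its package manager instead) is to drop the file into the per-user texmf tree:

```shell
# Install RJournal.sty into the per-user texmf tree (TEXMFHOME, which
# defaults to ~/texmf on TeX Live under Linux and is searched automatically).
sty=RJournal.sty
[ -f "$sty" ] || touch "$sty"  # placeholder here; use the file you downloaded
mkdir -p ~/texmf/tex/latex/rjournal
cp "$sty" ~/texmf/tex/latex/rjournal/
# 'kpsewhich RJournal.sty' should now print the installed path
```

No texhash run is needed for the home tree on TeX Live; it is searched directly.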

Now you should be able to open templates/RJournal.lyx and compile it. I have made a quick video of the process below:

So you have no excuse to escape reproducible research! It is now even easier than writing in Word to contribute a reproducible article to The R Journal.

P.S. I will try to submit this new layout file RJournal.layout as well as the template RJournal.lyx to the LyX development team if I do not hear any problems from users.

Publishing from R+knitr to WordPress http://yihui.name/en/2013/02/publishing-from-r-knitr-to-wordpress/ 2013-02-10T00:00:00-08:00 Yihui Xie http://yihui.name/en/2013/02/publishing-from-r-knitr-to-wordpress Tal Galili asked a question on StackOverflow about publishing blog posts to WordPress from R + knitr. William K. Morris wrote a solution a long time ago, and I tweaked it a little bit and created a function knit2wp() in the development version of knitr.

See the page for WordPress on the knitr website for details. Below is a sample screenshot:

P.S. I meant to write a separate post about this, but I probably will not find the time -- in case you have not noticed, the Journal of Statistical Software recently ranked first in terms of the impact factor in the category of "Statistics & Probability":

Although everybody agrees the impact factor is nonsense, I think this is still an indication of the impact of open-access journals. If I cannot read the full content of a paper from my Google search, I will not bother to read it at all. No, I'm lazy and I will not go to the library.

Code Pollution With Command Prompts http://yihui.name/en/2013/01/code-pollution-with-command-prompts/ 2013-01-26T00:00:00-08:00 Yihui Xie http://yihui.name/en/2013/01/code-pollution-with-command-prompts This is not the first time I have ranted about command prompts, but I cannot help ranting about them whenever I see them in source code. In short, a piece of source code with command prompts is like a bag of cooked shrimps in the market -- it does not make sense, and an otherwise good thing is ruined. I like cooking raw shrimps (way more tasty).

The command prompt here refers to the characters you see at the beginning of a command line, which indicate "I'm ready to take your commands". In a Unix shell, it is often $; in R, it is > by default. A shell example from the GTK web page:

$ jhbuild bootstrap
$ jhbuild build meta-gtk-osx-bootstrap
$ jhbuild build meta-gtk-osx-core


And an R example from R and Data Mining:

> data("bodyfat", package = "mboost")
> str(bodyfat)


There are numerous examples like this on the web and in books (sorry, GTK developers and Yanchang Zhao; I picked on you randomly). Most people seem to post their code like this. I can probably spend half a day figuring out a problem in measure theory, but I cannot, even if I spend two years, figure out why people include command prompts when publishing source code.

Isn't it too obvious that you are wasting the time of your readers?

Whenever command prompts are present in the source code, the reader has to copy the code, remove the prompts, and then use the code. Why does there have to be an additional step of removing the prompts? Why not make your code directly usable to other people?

jhbuild bootstrap
jhbuild build meta-gtk-osx-bootstrap
jhbuild build meta-gtk-osx-core

data("bodyfat", package = "mboost")
str(bodyfat)


I'm aware of the column selection mode in some editors. I just do not understand why the right thing cannot happen from the very beginning.
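If you do run into prompt-polluted code, a one-line filter can clean it up before use; this is just a sketch, assuming the prompts are >, +, or $ followed by a space:

```shell
# Remove a leading "> ", "+ ", or "$ " prompt from each line of copied code.
printf '> x <- 1\n+ print(x)\n' | sed -E 's/^[>+$] //'
# output:
# x <- 1
# print(x)
```

Still, the reader should not have to run this at all; that is the whole point.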

Some may argue the prompt helps typesetting, and it makes the code stand out because of the common prefix. In R, + means the last line is not complete, so > and + present the structure of the code, e.g.

> for (i in 1:10) {
+ print(i)
+ }


I believe the structure of code should be presented by indentation, which does not hurt the source code. To make the code stand out, choose a background color (shading) for it. That is the only correct way for typesetting purposes.

So, please stop code pollution, and post usable code.


Update on 2013/01/05: Xiao Nan pointed out in the comments that apply(combn(letters, 2), 2, paste0, collapse = '') was wrong for enumerating all two-letter usernames, and indeed it was: it is not a combination problem (combn() misses repeated letters like aa and reversed pairs like ba). Now I use his elegant outer() solution. One can also use expand.grid(letters, letters).

Github decided to shut down their Downloads service, and I was very unhappy with this decision. It means I have to migrate several files to other places and update links accordingly. I saw that Bitbucket still provides the service, so I want to migrate my files there.

Sadly my name yihui was already taken on Bitbucket, so I hoped I could get a short name instead, which made me think about how to check the availability of a username programmatically. Here was my solution with the RCurl package:

library(RCurl)
test_user = function(site = 'https://bitbucket.org/',
                     candidates = c(0:9, letters)) {
  for (i in candidates) {
    if (!url.exists(paste0(site, i))) message(i)
    Sys.sleep(runif(1, 0, .1)) # be nice
  }
}
# examples
test_user()
# two-letter names
test_user(candidates = as.vector(outer(letters, letters, 'paste0')))
# check github
test_user('https://github.com/')


As of the time of this blog post, there are no two-letter usernames left on Github, but some are still available on Bitbucket, e.g. by and eq, and the single character 4 is also available.

ICERM Reproducibility Workshop: Day 1 http://yihui.name/en/2012/12/icerm-reproducibility-workshop-day-1/ 2012-12-10T00:00:00-08:00 Yihui Xie http://yihui.name/en/2012/12/icerm-reproducibility-workshop-day-1 I'm attending a workshop on reproducibility at ICERM (Brown University) this week. I really appreciate this great opportunity offered by ICERM, Randy and Victoria.

It is pretty exciting to meet people whom you previously knew only indirectly. One coincidence was that I met Fernando here (for the first time)! We did not know each other before I wrote the IPython post back in November, and I did not expect that we would meet each other so soon. Anyway, it is great to see this extremely energetic guy in person. Nerdy as I am, I immediately asked him (did I even say hello?) how IPython saves and displays plots, and he quickly showed me on the whiteboard.

Some simple notes as bullet points:

• One big question about reproducible research is, where is the reward system? e.g. why should young researchers spend more time on making papers reproducible instead of publishing more papers? There seems to be no immediate reward (although some argue that papers with data/code available have higher citation rates)
• You should look at the Fortran code blackened out in Bill Rider's slides; it is not that people do not want to share...
• Victoria has a nice historical review of reproducible research
• I learned MacTutor from Jon Borwein's talk, which seems to be a good old website like Wikipedia (for Math)
• I enjoyed the talk by David Donoho most; they wrote a paper "Deterministic Matrices Matching the Phase Transitions of Gaussian Random Matrices", and what is truly amazing is that this paper was done through "crowd sourcing"; guess who the "crowd" was? His Stat330/CME362 students (as well as the TA)! They used Dropbox, GIT, runmycode and clusters. The three big points:
• Math as science: mathematicians should learn science and scientific publication (emphasis on publishing empirical results)
• Research as teaching: teaching can be turned into research; see the above paper
• Code development as science: I especially resonate with this point; code development has had mature models and practices for a long time, which should be the ideal (or standard) paradigm for doing science. It is rare to see an open source software package published as one single final version; instead we often see versions (version control and semantic versioning) marking the progress of the package, and we have a full history of how it was written, but papers almost always have only one version
• I learned HOL light from Tom Hales' talk, which is a computer program for proving theorems (does it help me write my PhD thesis?)
• David Bailey talked about High-Precision Computation and Reproducibility; I'm not familiar with this area but the talk was very interesting, e.g. a change in the floating-point library can lead to different observations of particles in physics (some particles might have "gone" after you replace the library); I did not realize numeric precision has such a profound influence

Keep an eye on the workshop website if you are interested.

BTW, to follow up on David's crowd sourcing, my advisor Di Cook did something similar earlier this year, but less seriously: the students who took Stat585 at Iowa State collaborated on Github to write a piece of fiction about statistics while we were learning GIT in that class, which was actually a lot of fun...

IPython vs knitr, or Python vs R http://yihui.name/en/2012/11/ipython-vs-knitr/ 2012-11-23T00:00:00-08:00 Yihui Xie http://yihui.name/en/2012/11/ipython-vs-knitr I watched this video by Fernando Pérez a few days ago when I was reading a comment by James Correia Jr on Simply Statistics:

This is absolutely a fantastic talk that I recommend everybody watch (it is good in both form and content). Not surprisingly, I started thinking about ipython vs knitr. Corey Chivers said we could learn a lot from each other, and that is definitely true on my side, since ipython is so powerful now.

## ipython and knitr

I did not take a close look at ipython when I was designing knitr because I'm still at the "hello-world" level in Python, and I did not realize until I watched the video that we ended up with some common features like:

• support for Markdown/MathJax (to be fair, MathJax support comes from RStudio and the markdown package rather than knitr), not to mention HTML and LaTeX
• multi-language integration: *magic in ipython (rmagic, octavemagic, ...) vs engines in knitr (python, ruby, bash, C++, ...)
• support for D3 (knitr example; I did not find a live example with ipython at the moment)

Obviously knitr is still much weaker than ipython in many aspects, but some aspects do not really hurt; for example, the user interface. IPython enables one to write in the browser, which looks cool and is indeed useful. We (Carlos, Simon and I) had a similar attempt called RCloud in the summer this year when I was doing an internship at AT&T Labs, which was a combination of Rserve, FastRWeb, knitr, markdown and a bunch of JS libraries. The user interface is pretty much like ipython; in fact, it was inspired by ipython.

The RCloud project is not completely done yet, but I believe RStudio has done a fair amount of work to make the user interface more friendly, so I'm not terribly worried.

## Python community

That being said, I felt overwhelmed when I saw the Emacs client for ipython in the talk. On one hand,

(I wrote that with Google Translate; not sure if it is accurate; what I mean is that, in terms of nationality, Japanese programmers have surprised me the most; examples include the Ruby language, Kohske Takahashi and kaz_yos)

On the other hand, the R community is still too small compared to Python. I have been looking forward to R Markdown support in Emacs/ESS. The infrastructure on R's side has been ready for quite a while. ESS developers have been working hard, but we just need more manpower to push R to a higher level in a more timely fashion (embrace the web server, EC2, D3, web sockets, Julia and all the cool stuff; not only generalized linear models).

## R community

Small as it is, the R community is also moving in interesting directions. I especially agree with Jeff Horner's recent post Innovation in Statistical Computing that RStudio has been making remarkable and innovative contributions to the R community. One important thing is that RStudio developers are not statisticians like the R core team. The R community absolutely needs this kind of fresh power: a good sense of user interface, good knowledge of modern computing technologies and, most important of all, good project/product managers.

The shiny package is yet another example besides those mentioned in Jeff's post. I think it is interesting to compare shiny with Rserve, FastRWeb, gWidgetsWWW(2), rApache, Rook and older packages like CGIwithR. From the technical point of view, each package in the latter group may be more complicated than shiny (Simon, Jeff and John are extremely smart guys), but apparently shiny has become the Gangnam Style of R web apps. Most users do not, and will not, care about the technology behind the package. A developer may feel it is unfair that the user only sees the nice Twitter Bootstrap style without noticing the websockets, but that is just the fact.

I always regard R as a general computing language instead of one for statistics only. We need more geeks in the R community both to understand non-statistical technologies such as Emacs Lisp, JavaScript and Httpd, and to connect them to certain aspects of (computational) science such as reproducible research.

## Misc

When I saw the 3D barplot in the talk, I felt R graphics would be able to survive for a while longer.

Can We Live Without Backslashes? http://yihui.name/en/2012/10/lyx-vs-latex/ 2012-10-30T00:00:00-07:00 Yihui Xie http://yihui.name/en/2012/10/lyx-vs-latex Two months ago there was a discussion in the ESS mailing list about Emacs/ESS started by Paul Johnson, who claimed "Emacs Has No Learning Curve". While this sounds impossible, he really has some good points, e.g. he encourages beginners to look at the menu. I think this is a good way to learn Emacs. In the very beginning, who can remember C-x C-f? If one finds it hard even to open a file, how is one supposed to move on? (I taught my wife Emacs/ESS, unfortunately, in the wrong way)

Then I went off-topic by discussing Emacs vs LyX as the editor for LaTeX, and I quoted an example by Chris Menzel. This is LyX:

And this is Emacs:

The difference is obvious. I find it difficult to understand why there are so many people who enjoy reading LaTeX code, which makes my eyes bleed. My experience of promoting LyX was often unsuccessful, and I feel some people were born to hate GUI. Paul Thompson, the last person in that discussion thread, wrote these final comments:

There is no such thing as LaTeX without \

Sorry, folks, but that whole idea is whack. Totally whack. It's like onions without tears. Onions have good qualities, but there is a cost. LaTeX has good qualities. The cost is \.

If you want formatting without \, use Word. You don't see a single \ but you don't see anything of any value.

The whole POINT of a markup language is to see the markup. If you don't want to see the markup, use a WYSIWYG editor like Word.

Editors which do LaTeX hiding the markup are just another WYSIWYG 3rd order approximation.

Frankly speaking, I agree with none of the above comments. LaTeX is excellent, but that does not mean I must read the backslashes for the rest of my life. LyX has done a wonderful job of hiding \ while giving us full power of LaTeX. If there is anything that cannot be done with GUI in LyX, we always have the ERT (Evil Red Text), i.e. we can input raw LaTeX code in LyX (Ctrl + L). LyX gives us a human-readable interface to LaTeX without sacrificing anything. The whole point of a markup language is not to read the source. If we are to see anything, we see the output. For example, we read a web page instead of its HTML code.

It is undeniable that sometimes we need to read the source code for other purposes such as debugging. LyX has a menu View => View Source to allow one to view the LaTeX source of the current document, which is something I often do. I appreciate the importance of source code, but LaTeX is an exception in my eyes. With LyX, I do not have to start every single LaTeX document with \documentclass{} and remember all the gory commands about the geometry or hyperref package (top margin, inner margin, outer margin, page width, bookmarks, etc etc). That is why I managed to write the documentation of my R packages quickly (e.g. knitr, formatR and Rd2roxygen, etc).

Finally, I do not recommend that LaTeX novices try LyX. It is a bad idea in general to use a GUI if you do not really understand it.

I know this promotion for LyX is, as usual, not going to help. People simply walk by.

Image via. Also see Changing just a tiny little bit in my LaTeX tabular.
