New rOpenSci Packages for Text Processing in R

  Jeroen Ooms   | JUNE 13, 2017

Textual data and natural language processing are still a niche domain within the R ecosystem. The NLP task view gives an overview of existing work, but a lot of basic infrastructure is still missing. At the rOpenSci text workshop in April we discussed many ideas for improving text processing in R, which revealed several core areas that need improvement. Reading: better tools for extracting text and metadata from documents in various formats (doc, rtf, pdf, etc).
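As a rough illustration of the kind of reading workflow this post has in mind, here is a minimal sketch that pulls plain text out of a PDF with pdftools (introduced further down this page) and out of a legacy Word document with the antiword package. The file names are hypothetical, and the use of antiword here is an assumption for illustration, not something this excerpt itself prescribes.

```r
# Minimal sketch, assuming pdftools and antiword are installed;
# "report.pdf" and "report.doc" are hypothetical example files.
library(pdftools)
library(antiword)

# Extract one character string per page from a PDF
pdf_pages <- pdf_text("report.pdf")

# Extract plain text from a legacy MS Word (.doc) file
doc_text <- antiword("report.doc")

# Peek at the start of the first PDF page
cat(substr(pdf_pages[1], 1, 500))
```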

Release 'open' data from their PDF prisons using tabulizer

  Thomas J. Leeper   | APRIL 18, 2017

There is no problem in science quite as frustrating as other people’s data. Whether it’s malformed spreadsheets, disorganized documents, proprietary file formats, data without metadata, or any other data scenario created by someone else, scientists have taken to Twitter to complain about it. As a political scientist who regularly encounters so-called “open data” in PDFs, this problem is particularly irritating. PDFs may have “portable” in their name, making them display consistently on various platforms, but that focus on consistent display means any information contained in a PDF is irritatingly difficult to extract computationally.
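A minimal sketch of what liberating tabular data from a PDF can look like with tabulizer is shown below; the file name is hypothetical and the exact call shown is an assumption for illustration rather than a walkthrough from the post.

```r
# Minimal sketch, assuming the tabulizer package (Java-based) is installed;
# "open_data.pdf" is a hypothetical example file.
library(tabulizer)

# extract_tables() returns a list with one matrix per detected table
tables <- extract_tables("open_data.pdf")

# Convert the first detected table to a data frame for further cleaning
df <- as.data.frame(tables[[1]], stringsAsFactors = FALSE)
head(df)
```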

Introducing pdftools - A fast and portable PDF extractor

  Jeroen Ooms   | MARCH 1, 2016

Scientific articles are typically locked away in PDF format, a format designed primarily for printing but not so great for searching or indexing. The new pdftools package allows for extracting text and metadata from PDF files in R. From the extracted plain text one could find articles discussing a particular drug or species name, without having to rely on publishers providing metadata, or pay-walled search engines. The pdftools package slightly overlaps with the Rpoppler package by Kurt Hornik.
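For instance, a minimal sketch of that search workflow, assuming a hypothetical article.pdf and an example search term:

```r
# Minimal sketch using pdftools; "article.pdf" is a hypothetical file.
library(pdftools)

# Metadata (title, author, creation date, ...) and per-page plain text
info  <- pdf_info("article.pdf")
pages <- pdf_text("article.pdf")   # one character string per page

# Find pages mentioning a particular drug or species name
hits <- grep("Plasmodium falciparum", pages, ignore.case = TRUE)
hits
```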
