Category Archives: Web Scraping

SPARQL and DBpedia: Getting structured data from Wikipedia

I always wondered whether we could extract structured data from Wikipedia. Then I stumbled upon DBpedia and SPARQL. DBpedia stores Wikipedia data as a structured dataset that can be queried using SPARQL. Let me demonstrate this with an example.

DBpedia has a public SPARQL endpoint, and you can use SNORQL to explore it. Let us execute the SPARQL query below in SNORQL and look at the result set that is returned. (SNORQL predeclares the dbpedia-owl and dbpprop prefixes; I have included them explicitly so the query also works against the raw endpoint.)

PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
PREFIX dbpedia-owl: <http://dbpedia.org/ontology/>
PREFIX dbpprop: <http://dbpedia.org/property/>

SELECT DISTINCT ?film_title ?star_name
WHERE {
    ?film_title rdf:type <http://dbpedia.org/ontology/Film> .
    ?film_title foaf:name ?film_name .
    ?film_title rdfs:comment ?film_abstract .
    ?film_title dbpedia-owl:starring ?star .
    ?star dbpprop:name ?star_name
}
LIMIT 5

I get the results as below,

SPARQL results from DBpedia

A good place to learn SPARQL is http://answers.semanticweb.com/ .
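If you would rather run the same kind of query from Java instead of SNORQL, here is a minimal sketch using only JDK classes. It assumes the endpoint accepts the query and an output format as plain HTTP GET parameters (the Virtuoso endpoint behind DBpedia did at the time of writing); the JSON response is just printed raw.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;
import java.net.URLEncoder;

public class DbpediaSparqlClient {
    public static void main(String[] args) throws Exception {
        String query = "PREFIX foaf: <http://xmlns.com/foaf/0.1/> "
                + "SELECT DISTINCT ?film ?name WHERE { "
                + "?film a <http://dbpedia.org/ontology/Film> . "
                + "?film foaf:name ?name } LIMIT 5";

        // Assumption: the endpoint takes query and format as GET parameters
        // and returns SPARQL JSON results.
        String requestUrl = "http://dbpedia.org/sparql"
                + "?query=" + URLEncoder.encode(query, "UTF-8")
                + "&format=" + URLEncoder.encode("application/sparql-results+json", "UTF-8");

        BufferedReader in = new BufferedReader(
                new InputStreamReader(new URL(requestUrl).openStream(), "UTF-8"));
        String line;
        while ((line = in.readLine()) != null) {
            System.out.println(line); // raw JSON result rows
        }
        in.close();
    }
}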

I hope this article helps you.


HtmlUnit Example for HTML parsing in Java

In continuation of my earlier blog HtmlUnit vs JSoup, in this blog I will show you how to write a simple web scraping sample using HtmlUnit. The example parses an HTML page and turns unstructured web data into a structured format.

In this simple example, we will connect to Wikipedia and get the list of movies and their Wikipedia source links. The page looks as below,

HtmlUnit: Screen Awards movie list

As always, let us start with a Maven dependency entry in our pom.xml to include HtmlUnit, as below,

<dependency>
    <groupId>net.sourceforge.htmlunit</groupId>
    <artifactId>htmlunit</artifactId>
    <version>2.11</version>
</dependency>

Again, we will start with a simple JUnit test case, as below,

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertTrue;

import java.io.IOException;
import java.net.MalformedURLException;
import java.util.List;

import org.junit.Test;

import com.gargoylesoftware.htmlunit.FailingHttpStatusCodeException;
import com.gargoylesoftware.htmlunit.WebClient;
import com.gargoylesoftware.htmlunit.html.DomNode;
import com.gargoylesoftware.htmlunit.html.HtmlPage;

@Test
public void testBestMovieList() throws FailingHttpStatusCodeException, MalformedURLException, IOException {
    final WebClient webClient = new WebClient();
    final HtmlPage startPage = webClient.getPage("http://en.wikipedia.org/wiki/Screen_Award_for_Best_Film");

    // XPath templates for the winners table; %d is replaced with the table row index.
    String titleTemplate = "/html/body/div[3]/div[3]/div[4]/table[2]/tbody/tr[%d]/td[2]/i/a/@title";
    String sourceTemplate = "/html/body/div[3]/div[3]/div[4]/table[2]/tbody/tr[%d]/td[2]/i/a/@href";

    // Row 2 of the table holds the first movie on the page.
    List<DomNode> titleNodes = (List<DomNode>) startPage.getByXPath(String.format(titleTemplate, 2));
    assertTrue(titleNodes.size() > 0);
    List<DomNode> sourceNodes = (List<DomNode>) startPage.getByXPath(String.format(sourceTemplate, 2));
    assertTrue(sourceNodes.size() > 0);
    assertEquals("Hum Aapke Hain Kaun", titleNodes.get(0).getNodeValue());
    assertEquals("/wiki/Hum_Aapke_Hain_Kaun", sourceNodes.get(0).getNodeValue());

    // Row 3 holds the second movie.
    titleNodes = (List<DomNode>) startPage.getByXPath(String.format(titleTemplate, 3));
    assertTrue(titleNodes.size() > 0);
    sourceNodes = (List<DomNode>) startPage.getByXPath(String.format(sourceTemplate, 3));
    assertTrue(sourceNodes.size() > 0);
    assertEquals("Dilwale Dulhaniya Le Jayenge", titleNodes.get(0).getNodeValue());
    assertEquals("/wiki/Dilwale_Dulhaniya_Le_Jayenge", sourceNodes.get(0).getNodeValue());

    webClient.closeAllWindows();
}

Notice that I am accessing the page http://en.wikipedia.org/wiki/Screen_Award_for_Best_Film, which looks like the screenshot above. The test reads the 1st and 2nd movies on the page (table rows 2 and 3), asserts their titles and links with JUnit, and succeeds. Also notice that I am using XPaths such as /html/body/div[3]/div[3]/div[4]/table[2]/tbody/tr[2]/td[2]/i/a/@title to access the elements; I extract these XPaths with Firebug, as described in my blog HtmlUnit vs JSoup.

I hope this blog helped you.

HtmlUnit vs JSoup: HTML parsing in Java

In continuation of my earlier blog Jsoup: nice way to do HTML parsing in Java, in this blog I will compare JSoup with a similar framework, HtmlUnit. Both are good HTML parsing frameworks, and both can be used for web application unit testing and for web scraping. In this blog I will explain why HtmlUnit is better suited to automated web application unit testing and JSoup is better suited to web scraping.

Web application unit testing automation typically means driving web tests from a JUnit framework, while web scraping means extracting unstructured information from the web into a structured format. I recently tried two decent web scraping tools, WebHarvy and Mozenda.

For any HTML parsing tool to be useful, it should support either XPath-based or CSS selector-based element access. There are plenty of blog posts comparing the two approaches, such as Why CSS Locators are the way to go vs XPath, and CSS Selectors And XPath Expressions.

HtmlUnit

HtmlUnit is a powerful framework in which you can simulate pretty much anything a browser can do, such as click events, submit events, and so on, which makes it ideal for automated web application unit testing. A short sketch of this browser simulation is shown below.
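Here is a minimal sketch of that simulation, assuming HtmlUnit 2.11 as in the example above; the URL, link text, and form/field names are hypothetical placeholders, not from any real application.

import com.gargoylesoftware.htmlunit.WebClient;
import com.gargoylesoftware.htmlunit.html.HtmlAnchor;
import com.gargoylesoftware.htmlunit.html.HtmlForm;
import com.gargoylesoftware.htmlunit.html.HtmlPage;
import com.gargoylesoftware.htmlunit.html.HtmlSubmitInput;

public class BrowserSimulationDemo {
    public static void main(String[] args) throws Exception {
        WebClient webClient = new WebClient();

        // Load a page and simulate clicking a link (URL and link text are placeholders).
        HtmlPage page = webClient.getPage("http://example.com/login");
        HtmlAnchor link = page.getAnchorByText("Sign in");
        HtmlPage afterClick = link.click();

        // Fill in and submit a form (form and field names are placeholders).
        HtmlForm form = afterClick.getFormByName("loginForm");
        form.getInputByName("username").setValueAttribute("user");
        form.getInputByName("password").setValueAttribute("secret");
        HtmlSubmitInput submit = form.getInputByName("submit");
        HtmlPage result = submit.click();

        System.out.println(result.getTitleText());
        webClient.closeAllWindows();
    }
}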

XPath-based parsing is simple and popular, and HtmlUnit leans heavily on it. In one of my applications, I wanted to extract information from the web in a structured way, and HtmlUnit worked out very well for me. But the problems start when you try to extract structured data from modern web applications that use jQuery and other Ajax features and use div tags extensively: much of the content is generated by JavaScript after the page loads, so an XPath extracted from the browser's rendered DOM often matches nothing in the HTML the parser actually receives. HtmlUnit and other XPath-based HTML parsers did not work for me here. There is also a JSoup variant that supports XPath based on Jaxen; I tried it as well, and guess what? It also could not access the data in modern web applications like ebay.com.

Finally, my experience with HtmlUnit was that it is a bit buggy, or maybe I should call it unforgiving, unlike a browser: if the target web application references missing JavaScripts, HtmlUnit throws exceptions. We can get around this, as shown below, but out of the box it will not work.
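For example, here is a sketch of that work-around, assuming HtmlUnit 2.11 as in the earlier example, where these settings live on the WebClientOptions object:

WebClient webClient = new WebClient();

// Be forgiving like a browser: do not fail the whole page
// because of a broken script or a failing resource request.
webClient.getOptions().setThrowExceptionOnScriptError(false);
webClient.getOptions().setThrowExceptionOnFailingStatusCode(false);

// Optionally, skip JavaScript and CSS processing entirely when
// you only need the static HTML.
webClient.getOptions().setJavaScriptEnabled(false);
webClient.getOptions().setCssEnabled(false);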

JSoup

The latest version of JSoup deliberately does not support XPath and instead supports CSS selectors very well. My experience is that it is excellent for extracting structured data from modern web applications. It is also far more forgiving when a web application references missing JavaScripts. A quick taste of its selector style is below.
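As a quick illustration of the CSS selector style (ebay.com is the site mentioned above; the h3.item-title selector is a hypothetical placeholder, not the site's real markup):

import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;
import org.jsoup.select.Elements;

public class JsoupCssSelectorDemo {
    public static void main(String[] args) throws Exception {
        // Fetch the page over HTTP and parse it into a Document.
        Document doc = Jsoup.connect("http://www.ebay.com").get();

        // Select elements with a CSS selector instead of an XPath
        // (the selector here is a hypothetical placeholder).
        Elements titles = doc.select("h3.item-title");
        for (Element title : titles) {
            System.out.println(title.text());
        }
    }
}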

Extracting XPath and CSS Selector data

In most browsers, if you point at an element, right-click, and choose “Inspect element”, you can extract the XPath information. I noticed that Firefox with Firebug can also extract the CSS selector path, as shown below,

HtmlUnit vs JSoup: Extract CSS Path and XPath in Firebug

I hope this blog helped.

Jsoup: nice way to do HTML parsing in Java

Typically you do HTML parsing in Java for various reasons, such as JUnit testing, web crawling, and others. I stumbled across JSoup and tried a few things to understand its capabilities. If you do some googling you will come across a few good Stack Overflow threads, such as What is a good java web crawler library? and JSoup vs HttpUnit.

I had already worked with HttpUnit extensively, and I felt that JSoup is better. Let me demonstrate a few of the capabilities of Jsoup in this blog.

Connecting to any website and parsing the data from that website into a DOM tree is as simple as,

URL url = new URL("http://gosmarter.net?query=cars");
Document doc = Jsoup.parse(url, 3000);

where the integer value passed to the parse method is the timeout in milliseconds; the call gives up if downloading from the site takes longer.

If you want to retrieve a table or a div from the DOM tree, you do it as below,

Iterator<Element> productList = doc.select("div[class=productList]").iterator();
assertTrue(productList.hasNext());
while (productList.hasNext()) {
    Element product = productList.next();
    // Do some processing with each product element
}

If you want to extract an image URL, you do it this way,

Element productLink = product.select("a").first();
String href = productLink.attr("abs:href");

Note in the above code that “abs:href” returns the absolute URL even if the path on the page is relative. Also, Element is a jsoup class with capabilities like the select method, which queries elements using jsoup's own selector language, and the attr method, which retrieves a specific attribute of an element; in this example we retrieve the href attribute of the “a” link tag. The first method always returns the first matching element when the selector matches many tags, such as repeated “td”, “tr”, or “li” tags.

You can also get a specific element from a list of “td”, “tr”, or “li” tags, as below,

Element descLi = product.select( "li:eq(0)").first();

Note that the select query above requests the first element, at index 0, from the list; the syntax is “li:eq(0)”.

You can also retrieve the text within a tag. For example, to retrieve the text in an “a” link tag, you do as below,

Element descA = product.select( "a").first();
String desc = descA.text();

Note the text method is used to retrieve the text.

Finally, if you want to retrieve the entire HTML content of an element, you can do as below,

Element descA = product.select( "a").first();
String descHtmlData = descA.html();

Note that you use the html method to retrieve the inner HTML content of an element. This is useful for debugging.
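Putting the snippets above together, here is a minimal self-contained sketch. The URL and the div[class=productList] selector are the ones used earlier in this post, so treat the page structure they assume as illustrative:

import java.net.URL;
import java.util.Iterator;

import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;

public class JsoupScraperDemo {
    public static void main(String[] args) throws Exception {
        // Download and parse the page, with a 3-second timeout.
        URL url = new URL("http://gosmarter.net?query=cars");
        Document doc = Jsoup.parse(url, 3000);

        // Walk each product block and print its first link.
        Iterator<Element> productList = doc.select("div[class=productList]").iterator();
        while (productList.hasNext()) {
            Element product = productList.next();
            Element productLink = product.select("a").first();
            if (productLink == null) {
                continue; // this block has no link
            }
            System.out.println("Name: " + productLink.text());
            System.out.println("Link: " + productLink.attr("abs:href"));
        }
    }
}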

There is also a Maven artifact available in the central Maven repository, as below,

<dependency>
    <groupId>org.jsoup</groupId>
    <artifactId>jsoup</artifactId>
    <version>1.7.1</version>
</dependency>

I hope this blog helped you.