Index document content with Solr and Tika

I’ve blogged in the past about indexing entire folders of documents with Solr and Tika through the Data Import Handler. This approach has pros and cons. On the good side, once you understand the basics, getting everything up and running is a matter of a couple of hours at most; on the bad side, using the DIH gives you little control over the entire process.

As an example, I’ve had problems with folders containing jpg images, because the extractor crashed due to a missing library. If you do not configure the import handler correctly, every error stops the entire import process. Another problem is that the document content is not subdivided into pages, even though Tika can give you this kind of information. Finally, all of your documents need to be inside a folder to be indexed. In real situations it is quite often preferable to have more control over the indexing process, so let’s examine how you can use Tika from your C# code.

The easiest way is directly invoking the tika.jar file with Java, it is quick and does not requires any other external library, just install java and uncompress tika in a local folder.

public TikaDocument ExtractDataFromDocument(string pathToFile)
{
    //the -h switch asks Tika to emit its output as structured HTML
    var arguments = String.Format("-jar \"{0}\" -h \"{1}\"", Configuration.TikaJarLocation, pathToFile);

    using (Process process = new Process())
    {
        process.StartInfo.FileName = Configuration.JavaExecutable;
        process.StartInfo.Arguments = arguments;
        process.StartInfo.WorkingDirectory = Path.GetDirectoryName(pathToFile);
        process.StartInfo.WindowStyle = ProcessWindowStyle.Minimized;
        process.StartInfo.UseShellExecute = false;
        process.StartInfo.ErrorDialog = false;
        process.StartInfo.CreateNoWindow = true;
        //redirect standard output so the HTML written by Tika can be captured
        process.StartInfo.RedirectStandardOutput = true;
        var result = process.Start();
        if (!result) return TikaDocument.Error;
        var fullContent = process.StandardOutput.ReadToEnd();
        return new TikaDocument(fullContent);
    }
}

This snippet of code simply invokes Tika, passing the file you want to analyze as an argument; it uses the standard System.Diagnostics.Process .NET object and intercepts standard output to grab Tika’s output. The output is parsed by a helper object called TikaDocument that takes care of understanding how the document is structured. If you are interested in the code you can find everything in the included sample, but it is just a matter of HTML parsing with the Html Agility Pack. E.g.:

Meta = new MetaHelper(meta);
var pagesList = new List<TikaPage>();
Pages = pagesList;
Success = true;
FullHtmlContent = fullContent;
HtmlDocument doc = new HtmlDocument();
doc.LoadHtml(fullContent);
FullTextContent = HttpUtility.HtmlDecode(doc.DocumentNode.InnerText);

var titleNode = doc.DocumentNode.SelectSingleNode("//title");
if (titleNode != null) 
{
    Title = HttpUtility.HtmlDecode(titleNode.InnerText);
}


var pages = doc.DocumentNode.SelectNodes(@"//div[@class='page']");
if (pages != null)
{
    foreach (var page in pages)
    {
        pagesList.Add(new TikaPage(page));
    }
}
var metaNodes = doc.DocumentNode.SelectNodes("//meta");
if (metaNodes != null)
{
    foreach (var metaNode in metaNodes)
    {
        //the constructor goes on here, reading each <meta> name/content pair into
        //the meta dictionary; see the downloadable sample for the full implementation
    }
}

Thanks to the TikaDocument class you can index the content of single pages; in my example I simply send the entire content of the document to Solr (I do not care about subdividing the document into pages). This is how I build the XML message for a standard document update:

public System.Xml.Linq.XDocument SolarizeTikaDocument(String fullPath, TikaDocument document)
{
    XElement elementNode;
    XDocument doc = new XDocument(
        new XElement("add", elementNode = new XElement("doc")));

    elementNode.Add(new XElement("field", new XAttribute("name", "id"), fullPath));
    elementNode.Add(new XElement("field", new XAttribute("name", "fileName"), Path.GetFileName(fullPath)));
    elementNode.Add(new XElement("field", new XAttribute("name", "title"), document.Title));
    elementNode.Add(new XElement("field", new XAttribute("name", "content"), document.FullTextContent));
    return doc;
}
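
Just for reference, the method above produces an XML message in the standard Solr update format, something like the following (the field values are obviously sample ones, and the field names id, fileName, title and content must exist in your schema):

<add>
  <doc>
    <field name="id">c:\docs\sample.pdf</field>
    <field name="fileName">sample.pdf</field>
    <field name="title">Sample document</field>
    <field name="content">Full text extracted by Tika...</field>
  </doc>
</add>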

To mimic how the DIH works, you can use a FileSystemWatcher to monitor a folder and index documents as soon as they get updated or added. In my sample I only care about files being added to the directory; the handler is shown below, followed by a sketch of how the watcher is wired up.

static void watcher_Created(object sender, FileSystemEventArgs e)
{
    var document = _tikaHandler.ExtractDataFromDocument(e.FullPath);
    var solrDocument = _solarizer.SolarizeTikaDocument(e.FullPath, document);
    _solr.Post(solrDocument);
}
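
For completeness, this is a minimal sketch of how the watcher itself can be wired up; the folder path is just an example and _tikaHandler, _solarizer and _solr are the helper objects created by the sample.

static void Main(string[] args)
{
    //the sample initializes _tikaHandler, _solarizer and _solr here

    //requires System.IO
    using (var watcher = new FileSystemWatcher(@"c:\docs\towatch"))
    {
        //react only to newly created files; Changed/Renamed could be wired the same way
        watcher.Created += watcher_Created;
        watcher.EnableRaisingEvents = true;

        Console.WriteLine("Watching folder, press Enter to stop.");
        Console.ReadLine();
    }
}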

This approach is more complex than using a plain DIH but gives you more control over the entire process and it is also suitable if documents are stored inside databases or in other locations.
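
The _solr object used in the handler is not shown in the post; all it has to do is POST the XML message to the core’s /update handler. A minimal sketch, assuming a core named documents on a local Solr instance, could be:

public void Post(XDocument document)
{
    //requires System.Net and System.Xml.Linq
    using (var client = new WebClient())
    {
        client.Headers[HttpRequestHeader.ContentType] = "text/xml; charset=utf-8";
        //commit=true makes the document immediately searchable; in a real indexer
        //you would batch documents and commit less frequently
        client.UploadString(
            "http://localhost:8983/solr/documents/update?commit=true",
            document.ToString());
    }
}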

Code is available here: http://sdrv.ms/17zKJdL

Gian Maria.

Index a folder of multilanguage documents in Solr with Tika


Everything is up and running, but now requirements change: documents can have multiple languages (Italian and English in my scenario) and we want to do the simplest thing that could possibly work. First of all I change the schema of the Solr core to support language-specific fields with wildcards.


Figure 1: Configuration of the Solr core to support multiple language fields.
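
As an indicative sketch, the dynamic field definitions of Figure 1 look more or less like the following (the exact field type names depend on the text types defined in your schema):

<dynamicField name="*_it" type="text_it" indexed="true" stored="true" multiValued="true"/>
<dynamicField name="*_en" type="text_en" indexed="true" stored="true" multiValued="true"/>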

This is a simple modification: all the fields are indexed and stored (for highlighting) and multivalued. Now we can leverage another interesting functionality of Solr+Tika, an update processor that identifies the language of every document that gets indexed. This time we need to modify the solrconfig.xml file, locating the section of the /update handler and modifying it in this way:

<requestHandler name="/update" class="solr.UpdateRequestHandler">
   <lst name="defaults">
	 <str name="update.chain">langid</str>
   </lst>
   
</requestHandler>

<updateRequestProcessorChain name="langid">
  <processor class="org.apache.solr.update.processor.TikaLanguageIdentifierUpdateProcessorFactory">
	<lst name="defaults">
	  <bool name="langid">true</bool>
	  <str name="langid.fl">title,content</str>
	  <str name="langid.langField">lang</str>
	  <str name="langid.fallback">en</str>
	  <bool name="langid.map">true</bool>
	  <bool name="langid.map.keepOrig">true</bool>
	</lst>
  </processor>
  <processor class="solr.LogUpdateProcessorFactory" />
  <processor class="solr.RunUpdateProcessorFactory" />
</updateRequestProcessorChain>

I use a TikaLanguageIdentifierUpdateProcessorFactory to identify the language of documents; it runs for every document that gets indexed, because it is injected in the update request chain. The configuration is simple and you can find full details in the Solr wiki. Basically I want it to analyze both the title and the content field of the document and enable mapping of fields. This means that if a document is detected as Italian it will contain content_it and title_it fields, not only the content field. Thanks to the previous modification of the schema, which matches dynamic fields to the correct language, all the content_xx fields are indexed with the correct language.
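
As a sketch, with this configuration an Italian document indexed by the code of the previous post ends up with fields more or less like these (values are invented, only the layout of the fields matters):

id         : c:\docs\tipografia.pdf
lang       : it
title      : Storia della tipografia
title_it   : Storia della tipografia
content    : ...full extracted text...
content_it : ...full extracted text...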

This way of proceeding consumes memory and disk space, because for each field I store both the original content and the localized copy, but it is needed for highlighting and makes my core simple to use.

Now I want to be able to search this multilanguage core; basically I have two choices:

  • Identify the language of the query terms and query only the corresponding field
  • Query all the localized fields with an OR.

Since detecting the language of the terms used in a query gives a lot of false positives, the second technique sounds better. Suppose you want to find the Italian term “tipografia”; you can issue the query: content_it:tipografia OR content_en:tipografia. Everything works as expected, as you can see from the following picture.


Figure 2: Sample search in all content fields.

Now if you want highlights in the results you must specify all the localized fields; you cannot simply use the content field. As an example, if I ask to highlight the result of the previous query using the original content field, I get no highlight.


Figure 3: No highlighting found if you use the original Content field.

This happens because the match in the document was not an exact match: I asked for the word tipografia, but in my document the match is on the term tipografo; thanks to language-specific indexing, Solr is able to match with stemming, as in a typical full-text search. The problem is that, when it is time to highlight, if you specify the content field Solr is not able to find any match for the word tipografia in it, so you get no highlight.

To avoid the problem, you should specify all the localized fields in the hl.fl parameter; this has no drawback, because a single document has only one non-empty localized field, and the result is the expected one:


Figure 4: If you specify localized content fields you can have highlighting even with a full-text match.

In this example, when it is time to highlight, Solr will use both content_it and content_en. In my document content_en is empty, but Solr is able to find a match in content_it and to highlight the original content, because content_it has stored="true" in the configuration.
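
Putting everything together, the request behind Figure 4 is essentially something like the following (host and core name are examples, and the parameters are split across lines only for readability):

http://localhost:8983/solr/documents/select
    ?q=content_it:tipografia OR content_en:tipografia
    &hl=true
    &hl.fl=content_it,content_en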

Clearly using a single core with multiple fields can slow down performance a little bit, but it is probably the easiest way to index multilanguage files automatically with Tika and Solr.

Gian Maria.

Installing Solr on Tomcat on Windows, error: SEVERE: Error filterStart

If you are used to installing Solr in a Windows environment and you install a version greater than 4.2.1 for the first time, you can have trouble getting your Solr server to start. The symptom is: the application is stopped in the Tomcat Application Manager, and if you press Start you get a simple error telling you that the application could not start.

To troubleshoot this kind of problem you can go to the Tomcat log directory and look at the Catalina log, but you will usually find little information there.

Mar 06, 2014 7:02:07 PM org.apache.catalina.core.StandardContext startInternal
SEVERE: Error filterStart
Mar 06, 2014 7:02:07 PM org.apache.catalina.core.StandardContext startInternal
SEVERE: Context [/solr47] startup failed due to previous errors

The reason for this is a change in the logging subsystem made after version 4.2.1, which is explained in the installation guide: Switching from Log4J back to JUL. I’ve blogged about this problem in the past, but it seems it still bites people, so it is worth spending another post on the subject. The solution is in the above link, but essentially you should open the folder where you unzipped the Solr distribution, go to solr/example/lib/ext and copy all the jar files you find there into the Tomcat lib subdirectory.


Figure 1: Jar files needed by Solr to start

After you copy these jar files into the Tomcat lib directory, restart Tomcat and Solr should start without problems.


Figure 2: Et Voilà, Solr is started.

Gian Maria.

Relations with not-found="ignore" disable lazy load and impact performance

NHibernate has a lot of interesting and specific options for mapping entities, options that can cover nearly every scenario you have in mind, but you need to be aware of the implications each advanced option has on performance.

Suppose you are in a legacy-database scenario where Entity A references Entity B, but someone outside the control of NHibernate can delete records from the table used by Entity B without clearing the corresponding referencing field on Entity A. You will end up with a database full of broken references, where rows of Table A reference, through an id field, a record of Table B that no longer exists. When this happens, if you load an entity of type A that references an entity of type B that was deleted, NHibernate throws an exception as soon as you access the navigation property, because it cannot find the related entity in the database.

If you know NHibernate you can use the not-found="ignore" mapping option, which basically tells NHibernate to ignore a broken reference key: if Entity A references an Entity B that was already deleted from the database, the reference is ignored, the navigation property is set to null, and no exception occurs (a minimal mapping sketch follows). This kind of solution is not without side effects: first of all you will find that every time you load an entity of type A another query is issued to the database to verify whether the related Entity B is really there. This actually disables lazy loading, because the related entity is always selected. This is not an optimal scenario, because you end up with a lot of extra queries, and it happens because not-found="ignore" is only a way to work around a real problem: you have broken foreign keys in your database.
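
For reference, the option is set on the relation in the mapping file; a minimal hbm.xml sketch (entity, class and column names are invented) looks like this:

<many-to-one name="EntityB"
             class="EntityB"
             column="EntityBId"
             not-found="ignore" />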

My suggestion is: fix the data in the database, keep the database clean without broken foreign keys, and remove every not-found="ignore" mapping option unless you really have no other solution. Please remember that even if you are using NHibernate you should not forget SQL capabilities. As an example, SQL Server (and almost every relational database on the market) has the ability to set up rules on foreign keys, e.g. ON DELETE SET NULL, which automatically sets a foreign key column to null when the related record is deleted. Such a feature prevents you from having broken foreign keys even if some legacy process manipulates the database, deleting records without a corresponding update of the related foreign key.
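
As an indicative example of that feature (table and column names are invented), in SQL Server the rule is declared directly on the foreign key constraint:

ALTER TABLE dbo.TableA
    ADD CONSTRAINT FK_TableA_TableB
    FOREIGN KEY (EntityBId)
    REFERENCES dbo.TableB (Id)
    ON DELETE SET NULL;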

Gian Maria.

Install Solr 4.3, pay attention to log libraries

After I configured Solr 4.3 on a virtual machine (side by side with a 4.0 instance) it refused to start, and the only error I had in the Catalina log file was

SEVERE: Error filterStart

This left me puzzled, but thanks to Alexandre and the exceptional Solr mailing list I was directed toward the solution. Solr 4.3 changed its logging mechanism, and at this link http://wiki.apache.org/solr/SolrLogging#What_changed you can read about what changed and how to enable logging for Solr 4.3.

It turns out that I had entirely missed this step:

  • Copy the jars from solr/example/lib/ext into your container’s main lib directory. For tomcat this is usually tomcat/lib. These jars will set up SLF4J and log4j.

And this is the only reason why my Solr instance refused to start: once the libs are inside tomcat/lib everything works as expected. It might not be your problem, but once the logging libraries are there you will surely get a better log that will help you troubleshoot why Solr refuses to start.

Gian Maria