2017/05/22

collusion - groundwork

Some of you may already have heard that I recently joined Neo Technology. Hardly anybody knows the company by that name and you'd be even more surprised to learn the name of the Swedish parent company (I definitely was) ... but if I tell you their flagship product is Neo4j ... bells might start ringing.

I am not going to start a Practical Neo4j blog. That's completely unnecessary: half the company has such a blog, and you can find excellent ones here, here and here (I randomly picked three, I could easily have picked twenty). No lack of brains and no lack of creative writers.

Much more interesting is writing about the collusion between the two worlds : the collusion of resource oriented computing and graph databases.

Admit it, you expected a Trump-related image for "collusion", didn't you ?
This is actually not the first time I'm writing about Neo4j on this blog. I already did so in 2012, and that was definitely a talking point when going through the application process. More recently I've explored the use of NetKernel for publishing RDF data. There is a NetKernel framework for that and several implementations can be found on GitHub.

So, what did I do in the past couple of weeks, besides going through the basics of graph theory (check out the YouTube videos of Dr. Sarada Herke, and forget what a pile of crap YouTube usually is) and tons of technical Neo4j documents ? Well, I started on another NetKernel framework obviously. The idea is that I want a second implementation of KBOData, this time using Neo4j. As an extra goal I want the framework to be so dynamic that I can actually point it at any dataset (loaded into Neo4j).

A bit of groundwork is required (and everything can be found on GitHub). The urn.com.ebc.tool.system module now contains an EnvironmentAccessor, which allows you to turn an operating system environment variable into a resource. This was originally developed for the Flemish government Milieuinfo site (I'll present that when it moves to production, but it is very much like KBOData or Stelselcatalogus).

The EnvironmentAccessor is used extensively in the urn.com.ebc.neo4j.database module (which defines the neo4j:databaseurl, neo4j:databaseuser and neo4j:databasepassword resources). We no longer require an application-specific module ; the operating system environment determines where we point our requests.
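For illustration, here is roughly what resolving those settings from the environment amounts to in plain Java. This is a sketch of the idea only, not the EnvironmentAccessor itself ; the variable names are my own invention and the fallbacks are the usual Neo4j conventions :

```java
import java.util.Optional;

public class Neo4jSettings {
    // Resolve a configuration value from the OS environment, with a default.
    static String resolve(String envVar, String fallback) {
        return Optional.ofNullable(System.getenv(envVar)).orElse(fallback);
    }

    public static void main(String[] args) {
        // Hypothetical variable names; the defaults are the Neo4j conventions.
        String url  = resolve("NEO4J_DATABASEURL",  "bolt://localhost:7687");
        String user = resolve("NEO4J_DATABASEUSER", "neo4j");
        String password = resolve("NEO4J_DATABASEPASSWORD", "neo4j");
        // password deliberately not printed (see the P.S. below)
        System.out.println("pointing requests at " + url + " as " + user);
    }
}
```

The point being : nothing application-specific is compiled in, the environment decides.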

Next up is the urn.org.neo4j.driver module, which repackages the Neo4j Java driver. As usual you will not find the actual jar file in the module. You can find it here ; you need to drop it in the lib subdirectory of the module.

Based on the driver, the urn.com.ebc.neo4j.client module can issue a Cypher request to a Neo4j database server (I'm not working embedded this time round) and return the result. Currently a bare-bones, work-in-progress RowsAccessor is available. More work is to be done here.

A urn.com.ebc.neo4j.fulcrum module has been created to provide the HTTP server for the framework. A fixed port (8500) has been set for the server, but as you know that can be overridden on the commandline, so it does not violate our "everything should be dynamic" principle.

Last but not least I've started work on the urn.com.ebc.neo4j.server module. Note once more that this is not an application-specific module ; the idea is that it can and should be completely agnostic of the underlying database. For starters it already has the capability to determine all possible node-labels in the database, and each and every node can already be queried as a resource (and is served in the HTTP server) : res:/node/<label>/<id>.
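To make that concrete, here is a sketch (mine, not the actual module code) of how such a resource identifier could be translated into Cypher. Labels cannot be passed as Cypher parameters, hence the crude backtick guard :

```java
public class NodeResource {
    // Sketch: translate a res:/node/<label>/<id> identifier into Cypher.
    static String toCypher(String identifier) {
        String[] parts = identifier.split("/");
        if (parts.length != 4 || !identifier.startsWith("res:/node/")) {
            throw new IllegalArgumentException("expected res:/node/<label>/<id>");
        }
        // Labels cannot be Cypher parameters, so backtick-quote them and
        // strip any backticks as a crude injection guard.
        String label = parts[2].replace("`", "");
        // The id part is numeric, so parse it instead of splicing it in.
        long id = Long.parseLong(parts[3]);
        return "MATCH (n:`" + label + "`) WHERE id(n) = " + id + " RETURN n";
    }
}
```

In the real module the request of course goes through the urn.com.ebc.neo4j.client module, not through plain string concatenation.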

Not a bad start if I say so myself and there's more to come in the next weeks, so stay tuned !


P.S. I do realize that having a neo4j:databasepassword resource is not actually the most secure option. I'm working on a better solution that a) does not violate the everything should be dynamic principle and b) does not require Kerberos or LDAP or anything else that is only included in the Neo4j Enterprise Edition.

P.P.S. As with an SQL endpoint and a SPARQL endpoint, a Cypher endpoint is only as good as the query you launch through it. If you want to bring the database down, you can. There is still a need for a more general solution for this. I wonder how the fragment-idea is coming along ...
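One possible stop-gap, sketched here under my own assumptions rather than taken from the framework : give each query a fixed time budget and cut it off when exceeded. Mind that cancel(true) merely interrupts the worker thread ; a real driver session would also have to be closed to abort the work server-side, so this is a sketch, not the general solution.

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class QueryGuard {
    // Run a piece of work (e.g. a Cypher call) with a fixed time budget.
    static <T> T withTimeout(Callable<T> query, long millis) {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        Future<T> handle = pool.submit(query);
        try {
            return handle.get(millis, TimeUnit.MILLISECONDS);
        } catch (TimeoutException e) {
            handle.cancel(true); // interrupts the thread only, not the server
            throw new RuntimeException("query exceeded " + millis + "ms budget", e);
        } catch (Exception e) {
            throw new RuntimeException(e);
        } finally {
            pool.shutdownNow();
        }
    }
}
```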

P.P.P.S. If you're interested in seeing my first show-and-dance for Neo4j, you're most welcome at the Amsterdam GraphDay on June 6th. I'm doing the Introductory Training Session in the afternoon.

2016/12/02

Into the wasteland

However strange this may sound, Resource Oriented Computing is not used everywhere yet. While this is of course only a matter of time, it may cause some inconvenience in the meantime.

Hence the need for transports. Transports bring external events into the ROC abstraction and take responses back out.


http://www.deviantart.com/art/Wasteland-Truck-521012152

The obvious choice when it comes to transports is HTTP. And sure, out of the box NetKernel comes with two HTTP transports enabled : one running on port 8080, also known as the Frontend Fulcrum, and one running on port 1060, also known as the Backend Fulcrum. All other transports are pretty much forgotten/overlooked.

To change that, we are going to take a look at the SMTPTransport today. In human words : I am going to send mails to NetKernel and I want NetKernel to do something with them. To make things even more interesting, the body of the mail is going to be a fully specified declarative request (no worries, I'll show an example further on) that I want NetKernel to execute.

You will need to install the email-core module for this. It is not installed by default, but it is readily available in Apposite and has been for quite a long time. After installing it you can take a look at the documentation here.

In short, the SMTPTransport listens on port 25000 (you can configure this) and turns incoming mails into smtp:message requests. We can very easily handle those with a mapper.

<mapper>
 <config>
  <endpoint>
   <grammar>
    <active>
     <identifier>smtp:message</identifier>
     <argument name="from" desc="from address" min="1" max="1" />
     <argument name="to" desc="to address" min="1" max="1" />
    </active>
   </grammar>

   <request>
    <identifier>active:groovy</identifier>
    <argument name="operator">res:/resources/groovy/message.groovy</argument>
    <argument name="from" method="as-string">[[arg:from]]</argument>
    <argument name="to" method="as-string">[[arg:to]]</argument>
   </request>
  </endpoint>
 </config>
 <space>
  <endpoint>
   <id>com:ebc:smtp:requestprocessor:transport</id>
   <prototype>SMTPTransport</prototype>
   <private/>
  </endpoint>

  <import>
   <uri>urn:com:ebc:smtp:requestprocessor:import</uri>
   <private/>
  </import>
 </space>
</mapper>

Like I said, what we want to do is treat the incoming message body as a declarative request. All it takes in the way of code is this :

// arguments
Document aBodyDoc = (Document)aContext.source("emailMessage:/part/0/body",Document.class);
//

// processing
ILogger vLogger=aContext.getKernelContext().getKernel().getLogger();
RequestBuilder vBuilder = new RequestBuilder(aBodyDoc.getDocumentElement(), vLogger);
INKFRequest vRequest = vBuilder.buildRequest(aContext,null,null);
aContext.issueAsyncRequest(vRequest);
//

// response
INKFResponse vResponse = aContext.createResponseFrom("smtp request processed");
vResponse.setMimeType("text/plain");
vResponse.setExpiry(INKFResponse.EXPIRY_ALWAYS);
//

I admit, this is not production-ready code (no error handling, for one), but it does the job quite nicely. What remains to be shown is my import rootspace :

<rootspace
 name="ebc smtp requestprocessor import"
 public="false"
 uri="urn:com:ebc:smtp:requestprocessor:import">

 <fileset>
  <!-- contains groovy resources -->
  <regex>res:/resources/groovy/.*</regex>
 </fileset>

 <import>
  <!-- contains SMTPTransport -->
  <uri>urn:org:netkernel:email:core</uri>
 </import>

 <import>
  <!-- contains DOMXDAParser -->
  <uri>urn:org:netkernel:xml:core</uri>
 </import>

 <import>
  <!-- contains active:groovy -->
  <uri>urn:org:netkernel:lang:groovy</uri>
 </import>

 <import>
  <!-- contains SimpleImportDiscovery, DynamicImport -->
  <uri>urn:org:netkernel:ext:layer1</uri>
 </import>

 <endpoint>
  <prototype>SimpleImportDiscovery</prototype>
  <grammar>active:SimpleImportDiscovery</grammar>
  <type>smtprequestprocessor</type>
 </endpoint>

 <endpoint>
  <prototype>DynamicImport</prototype>
  <config>active:SimpleImportDiscovery</config>
 </endpoint>

</rootspace>

Let's quickly run through that. Groovy module and sources ... check. The email-core module ... check. So what do we need xml-core for ? Well, the email message body is actually a binary stream, and as you can see in our code we need a Document (that would be an org.w3c.dom.Document). Transreption comes to our aid : the xml-core module contains the transreptor we need.
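Written out by hand, and outside NetKernel, the work that transreptor does for us would look roughly like this (a hand-rolled sketch, not the xml-core implementation) :

```java
import java.io.ByteArrayInputStream;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

public class BodyToDom {
    // Hand-rolled equivalent of the binary-stream-to-DOM transreption:
    // parse the raw message body into the Document the RequestBuilder needs.
    static Document parse(byte[] body) {
        try {
            return DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder()
                    .parse(new ByteArrayInputStream(body));
        } catch (Exception e) {
            throw new RuntimeException("message body is not parseable XML", e);
        }
    }
}
```

The nice thing about transreption is of course that we never have to write this plumbing ourselves.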

All of the above speaks for itself, but you may wonder why I have a dynamic import set up in there. Well, remember that we want to execute requests ? Here's an example :

<request>
    <identifier>active:csvfreemarkerasync</identifier>
    <argument method="as-string" name="in">file:/C:/nkwork/UKCOMPANY/input/BasicCompanyData-2016-11-01-part1_5.csv</argument>
    <argument method="as-string" name="out">file:/C:/nkwork/UKCOMPANY/output/company_1_5.ttl</argument>
    <argument method="as-string" name="template">file:/C:/nkwork/UKCOMPANY/template/company.freemarker</argument>
    <argument method="as-string" name="separator">,</argument>
    <header name="forget-dependencies">
        <literal type="boolean">true</literal>
    </header>
    <representation>java.lang.String</representation>
</request>

There is, however, no way that my message handler can launch that request (active:csvfreemarkerasync in this case) without having access to it. Now, there are several ways we can make that happen ... I rather like the dynamic import way.

So how does it work ? Launch your module and it will listen on port 25000. To send it a mail we'll go commandline. On Windows, we'll use CMail.

cmail -from:practical.netkernel@gmail.com -to:practical.netkernel@gmail.com -host:localhost:25000 -body-file:request.txt

The file request.txt obviously contains ... the request. There is no more to it. By the way, with PowerShell you can even do it without an extra tool :

$EmailFrom = "practical.netkernel@gmail.com"
$EmailTo = "practical.netkernel@gmail.com"
$Subject = "sending a request to netkernel"
$Body = [IO.File]::ReadAllText(".\request.txt")
$SMTPServer = "localhost"
$SMTPClient = New-Object Net.Mail.SmtpClient($SmtpServer, 25000)
$SMTPClient.EnableSsl = $false
$SMTPClient.Credentials = New-Object System.Net.NetworkCredential("none", "needed");
$SMTPClient.Send($EmailFrom, $EmailTo, $Subject, $Body)

For Linux too there are many many many ways to do this (the following requires heirloom-mailx) :

cat /var/tmp/request.txt | mailx -v -S smtp="localhost:25000" practical.netkernel@gmail.com

To close this post : if you are wondering why I'm using commandline tools ... well, the above makes adding NetKernel requests to your automation chain (or DevOpsBot, or whatever) very simple.

2016/11/23

apt repository

I could not afford to wait long before rolling out an Apt repository for NetKernel 6.1.1 as well. I'm happy to announce the wait is now over !

As with RHEL, there are two repositories available. This is again due to the possible difference in startup scripts ; there is no difference whatsoever with regards to NetKernel itself.

So, if your Debian-flavoured distribution is using SysV-style startup-scripts (the scripts live in /etc/init.d) you want to pick the 14.04 repository below. If it is using systemd startup-scripts, pick the 16.04 repository below.

In case you're wondering what the 14.04 and 16.04 stand for ... those are the Ubuntu LTS releases that I tested on (in fact, Ubuntu has been using systemd since 15.04). As you'll understand I don't have every Debian-flavoured distribution available for testing (nor would I want to), but if your system is using Apt, chances are good the below will work for you.

On 14.04
Create a new repository entry (or have your System Administrator do this) in /etc/apt/sources.list :
## NetKernel
deb http://tomgeudens.io:8400/Debian/ 14.04/


In order to be able to work with this repository, you will have to add the public key to apt :
wget http://tomgeudens.io:8400/Debian/keys/aptrepo_public.key
sudo apt-key add aptrepo_public.key

Next, you update your apt cache :

sudo apt-get update

If all goes well, you should be able to install NetKernel :

sudo apt-get install netkernel-se

Now you've got a completely functional NetKernel 6.1.1 SE on your system. It lives in /opt/netkernel and has its own service user, netkernel. It's not running yet though, because this is where some configuration comes in (see my previous post).


When you're ready to roll, this is what remains to be done :

sudo update-rc.d netkernel enable
sudo service netkernel start



On 16.04

Create a new repository entry (or have your System Administrator do this) in /etc/apt/sources.list :
## NetKernel
deb http://tomgeudens.io:8400/Debian/ 16.04/


The other steps are the same and when you're ready to roll, this is what remains to be done :

sudo systemctl enable netkernel
sudo systemctl start netkernel



Remarks

  1. You'll notice that the installation already contains the latest updates to NetKernel 6.1.1. The idea is that my repositories will always (give or take a couple of days, I'm only human) contain a version that does not require updates from Apposite.
  2. Apt has recently raised the ante on security requirements for signing the repository. The above repositories are both compliant with those requirements (and should therefore not give warnings when you update the cache).


2016/11/13

yum repository

With NetKernel 6.1.1 now out in the wild, you have every reason to install it at your local shop and show it off. Now, I don't know about your daily job, but mine is in a bank/insurance environment (I'm a freelance consultant at the moment) and such shops are not so happy with :
  • downloading a jar
  • copying the jar to a target system
  • manually installing the jar
While these steps are trivial in themselves, they are often considered unprofessional/untrustworthy.

So, while solving this problem for myself, I decided to solve it for everybody else too. I'm therefore proud to present the NetKernel Yum Repository today. At the moment it contains a NetKernel 6.1.1 SE rpm ; an EE rpm will be added soon.

How does it work ?

On RHEL6 / CentOS 6 
Create a new repository definition file (or have your System Administrators create it, strangely enough this will rarely be considered a problem), for example /etc/yum.repos.d/netkernel.repo, with the following content :

[netkernel]
name=NetKernel Yum Repository
baseurl=http://tomgeudens.io:8400/RHEL/6/$basearch
gpgcheck=1
repo_gpgcheck=1
gpgkey=http://tomgeudens.io:8400/RHEL/6/$basearch/yumrepo_public.key
enabled=1


Notice that both the rpms in the repository and the repository itself are signed. No messing around with security : I've got to convince the security team of a bank here !

Next you should be able to search for NetKernel and install it (yum may ask you to import the yumrepo_public.key, just say yes when asked) :

sudo yum search netkernel
======= N/S Matched: netkernel ========
netkernel-se.x86_64 : NetKernel Standard Edition


sudo yum install netkernel-se
================================================================================
 Package              Arch           Version            Repository         Size
================================================================================
Installing:
 netkernel-se         x86_64         6.1.1-el6          netkernel          26 M

Transaction Summary
================================================================================
Install       1 Package(s)

Total download size: 26 M
Installed size: 39 M
Is this ok [y/N]: y
Downloading Packages:
netkernel-se-6.1.1-el6.x86_64.rpm                        |  26 MB     00:04    
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
  Installing : netkernel-se-6.1.1-el6.x86_64                                1/1
  Verifying  : netkernel-se-6.1.1-el6.x86_64                                1/1

Installed:
  netkernel-se.x86_64 0:6.1.1-el6                                              

Complete!


Now you've got a completely functional NetKernel 6.1.1 SE on your system. It lives in /opt/netkernel and has its own service user, netkernel. It's not running yet though, because this is where some configuration comes in (pretty much as you would also not start an Apache server before configuring it) :

  • At some shops logs are kept in a central place. The /opt/netkernel/log directory is empty at this point and can therefore easily be relinked to a central location.
  • You might want to change the JVM settings in /opt/netkernel/bin/jvmsettings.cnf. Memory is cheap, so add lots ...
  • NetKernel (as it is set up here) requires access to the Apposite repositories. This may require you to review and change the HTTP Proxy Settings in /opt/netkernel/etc/kernel.properties.
  • ...
When you're ready to roll, this is what remains to be done :

sudo chkconfig netkernel on
sudo service netkernel start

 
On RHEL7 / CentOS 7
There are a few (small) differences on the more recent Red Hat-flavoured systems. The idea of runlevels and such has practically disappeared (some traces are left for backwards compatibility), so the package is a bit different. Let's start by creating a new repository definition file :

[netkernel]
name=NetKernel Yum Repository
baseurl=http://tomgeudens.io:8400/RHEL/7/$basearch
gpgcheck=1
repo_gpgcheck=1
gpgkey=http://tomgeudens.io:8400/RHEL/7/$basearch/yumrepo_public.key
enabled=1


The yum commands to search and install are identical ; you will notice that the rpm installed is netkernel-se-6.1.1-el7.x86_64.rpm.

The configuration is also identical. To get things started, however, these are the commands :

sudo systemctl enable netkernel
sudo systemctl start netkernel


Questions
  1. Does the package verify the Java 1.8.x requirement ?
    No, it does not at the moment, I'm still figuring out how I can cater for both OpenJDK and Oracle (and possibly others) at the same time.
  2.  I'm running on Debian/Ubuntu, do you have an Apt repository available ?
    Not yet, but watch this space.
  3. I notice you only have a x86_64 package available, can you also provide a i386 package ?
    Yes I can, that's on my todo-list.
  4. Look, my shop really does not allow me to punch through the firewall to the Apposite repository. Can you provide a package that contains a few extra modules (x, y and z) and does not phone home ?
    Well, if you really need that, contact me with your requirements and I'll see what I can come up with :
    tom<dot>geudens<at>hush<dot>ai
    practical<dot>netkernel<at>gmail<dot>com


2016/11/04

six one one

Today sees the release of NetKernel 6.1.1. For those of you who know what NetKernel is, this is a good time to revisit it ; for those of you who don't ... this is an excellent time to check it out !

Did you know ... 
  • That NetKernel is not just another Java Framework ?
    Follow the above link to find out what NetKernel is !
  • That NetKernel is not just another application server ?
    Follow the above link to find out what NetKernel is !
  • That NetKernel is a great choice for RDF (and Open Data in general) processing and publishing ? There are several existing implementations already and several more in the works !
  • That you can find some of my working examples on Github ?

2014/11/08

ten

When writing you occasionally make bold statements, and rarely does somebody call your bluff. But somebody did, and therefore I have to show you ten significantly different hello world examples.

One
<literal type="string" uri="res:/tomgeudens/helloworld-literal">Hello World</literal>

As I've said before, we should have that one in the list of Hello World programs.

Two
<accessor>
    <id>tomgeudens:helloworld:java:accessor</id>
    <class>org.tomgeudens.helloworld.HelloWorldAccessor</class>
    <grammar>res:/tomgeudens/helloworld-java</grammar>
</accessor>


For the code itself I refer to earlier blogposts.

Three
<accessor>
    <id>tomgeudens:helloworld:groovy:accessor</id>
    <prototype>GroovyPrototype</prototype>
    <script>res:/resources/groovy/helloworld.groovy</script>
    <grammar>res:/tomgeudens/helloworld-groovy</grammar>
</accessor>

Which requires the script and the language import
<literal type="string" uri="res:/resources/groovy/helloworld.groovy">
    context.createResponseFrom("Hello World");
</literal>


<import>
    <!-- contains GroovyPrototype -->
    <uri>urn:org:netkernel:lang:groovy</uri>
</import>


Four
<mapper>
    <config>
        <endpoint>
            <grammar>res:/tomgeudens/helloworld-data</grammar>
            <request>
                <identifier>data:text/plain,Hello World</identifier>
            </request>
        </endpoint>
    </config>
    <space>

        <import>
            <!-- contains data:/ scheme -->
            <uri>urn:org:netkernel:ext:layer1</uri>
        </import>

    </space>
</mapper>


Five
<mapper>
    <config>
        <endpoint>
            <grammar>res:/tomgeudens/helloworld-file</grammar>
            <request>
                <identifier>file:/c:/temp/helloworld.txt</identifier>
            </request>
        </endpoint>
    </config>
    <space>

        <import>
            <!-- contains file:/ scheme -->
            <uri>urn:org:netkernel:ext:layer1</uri>
        </import> 

    </space>
</mapper>


Of course you need to replace the identifier with an existing file of your own.

Six
<mapper>
    <config>
        <endpoint>
            <grammar>res:/tomgeudens/helloworld-fileset</grammar>
            <request>
                <identifier>res:/resources/txt/helloworld.txt</identifier>
            </request>
        </endpoint>
    </config>
    <space>

        <fileset>
            <regex>res:/resources/txt/.*</regex>
        </fileset>

    </space>
</mapper>


Seven
<mapper>
    <config>
        <endpoint>
            <grammar>res:/tomgeudens/helloworld-freemarker</grammar>
            <request>
                <identifier>active:freemarker</identifier>
                <argument name="operator">data:text/plain,${one} ${two}</argument>
                <argument name="one">data:text/plain,Hello</argument>
                <argument name="two">data:text/plain,World</argument>
            </request>
        </endpoint>
    </config>
    <space>

        <import>
            <!-- contains active:freemarker -->
            <uri>urn:org:netkernel:lang:freemarker</uri>
        </import>
 

        <import>
            <!-- contains data:/ scheme -->
            <uri>urn:org:netkernel:ext:layer1</uri>
        </import> 

    </space>
</mapper>


Eight
<mapper>
    <config>
        <endpoint>
            <grammar>res:/tomgeudens/helloworld-http</grammar>
            <request>
                <identifier>http://localhost:8080/tomgeudens/helloworld-literal</identifier>
            </request>
        </endpoint>
    </config>
    <space>

        <import>
            <!-- contains http:/ scheme -->
            <uri>urn:org:netkernel:client:http</uri>
        </import>

    </space>
</mapper>


Which requires that the first example is exposed on the frontend fulcrum.
 
Nine
<mapper>
    <config>
        <endpoint>
            <grammar>res:/tomgeudens/helloworld-xpath</grammar>
            <request>
                <identifier>active:xpath</identifier>
                <argument name="operand">
                    <literal type="xml">
                        <document>Hello World</document>
                    </literal>
                </argument>
                <argument name="operator">
                    <literal type="string">string(/document)</literal>
                </argument>
            </request>
        </endpoint>
    </config>
    <space>

        <import>
            <!-- contains active:xpath -->
            <uri>urn:org:netkernel:xml:core</uri>
        </import>

    </space>
</mapper>


Ten
<mapper>
    <config>
        <endpoint>
            <grammar>res:/tomgeudens/helloworld-dpml</grammar>
            <request>
                <identifier>active:dpml</identifier>
                <argument name="operator">res:/resources/dpml/helloworld.dpml</argument>
            </request>
        </endpoint>
    </config>
    <space>

        <literal type="xml" uri="res:/resources/dpml/helloworld.dpml">
            <sequence>
                <literal assignment="response" type="string">Hello World</literal>
            </sequence>
        </literal>


        <import>
            <!-- contains active:dpml -->
            <uri>urn:org:netkernel:lang:dpml</uri>
        </import>

    </space>
</mapper>



And there you go : ten resource oriented hello world examples. There are many more possibilities, but I think the above shows both that there is a lot available and that the patterns are always the same. Enjoy.



2014/10/24

back to the beginning ... async 101

Even the most humble of modern laptops has multiple cores at its disposal. When you work Resource Oriented, you benefit from the fact that resource requests are automatically spread over the available cores. However, within one (root) request you typically make subrequests sequentially. In most cases this is exactly what you want, as one subrequest provides the input for the next ... and so on.

There are cases however where you can benefit from parallel processing. A webpage, for example, can be composed from several snippets which can be requested in parallel. In a previous post I discussed the XRL language :

<html xmlns:xrl="http://netkernel.org/xrl">
    <xrl:include identifier="res:/elbeesee/demo/xrl/header" async="true"/>
    <xrl:include identifier="res:/elbeesee/demo/xrl/body" async="true"/>
    <xrl:include identifier="res:/elbeesee/demo/xrl/footer" async="true"/>
</html>


Another use case for parallel processing is batch processing. In my last post I developed an active:csvfreemarker component. It applies a Freemarker template to every csv row in an input file and writes the result to an output file. It works. However, the files I want processed contain millions of rows, and applying a Freemarker template does take a bit of time. Can parallel processing help ? Yes it can ! Here's the relevant bit of code :

while(vCsvMap != null) {
    int i = 0;
    List<INKFAsyncRequestHandle> vHandles = new ArrayList<INKFAsyncRequestHandle>();

    while( (vCsvMap != null) && (i < 8) ) {
        INKFRequest freemarkerrequest = aContext.createRequest("active:freemarker");
        freemarkerrequest.addArgument("operator", "res:/resources/freemarker/" + aTemplate + ".freemarker");
        for (Map.Entry<String,String> vCsvEntry : vCsvMap.entrySet()) {
            freemarkerrequest.addArgumentByValue(vCsvEntry.getKey().toUpperCase(), vCsvEntry.getValue());
        }
        freemarkerrequest.setRepresentationClass(String.class);
        INKFAsyncRequestHandle vHandle = aContext.issueAsyncRequest(freemarkerrequest);
        vHandles.add(vHandle);

        vCsvMap = vInReader.read(vHeader);
        i = i + 1;
    }
    for (int j=0; j<i; j++) {
        INKFAsyncRequestHandle vHandle = vHandles.get(j);
        String vOut = (String)vHandle.join();
        vOutWriter.append(vOut).append("\n");
    }

}

The Freemarker requests are issued as async requests in groups of eight. Their results are then joined, in order, in the for-loop.

Why eight ? That number depends on several things : the number of cores available, the duration of each async request, ... You'll need to experiment a bit to see what fits your environment/requirements. So the number should actually not be hard-coded. Bad me.
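To make that mea culpa concrete, here is the same batch-and-join pattern in plain Java (no NetKernel APIs, the names are my own), with the batch size as a parameter instead of a magic eight :

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.function.Function;

public class BatchProcessor {
    // Issue work asynchronously in groups of batchSize, then join the
    // handles in order so the output keeps the input order.
    static <I, O> List<O> process(List<I> rows, Function<I, O> work, int batchSize) {
        ExecutorService pool = Executors.newFixedThreadPool(batchSize);
        List<O> out = new ArrayList<>();
        try {
            for (int start = 0; start < rows.size(); start += batchSize) {
                int end = Math.min(start + batchSize, rows.size());
                List<Future<O>> handles = new ArrayList<>();
                for (I row : rows.subList(start, end)) {
                    handles.add(pool.submit(() -> work.apply(row))); // issue async
                }
                for (Future<O> handle : handles) {
                    try {
                        out.add(handle.get()); // join, preserving order
                    } catch (Exception e) {
                        throw new RuntimeException(e);
                    }
                }
            }
        } finally {
            pool.shutdown();
        }
        return out;
    }
}
```

With that in place the caller can experiment with the batch size, for example starting from Runtime.getRuntime().availableProcessors().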