[PragmaticWeb] Faceted Search
Sebastian Samaruga
cognescent at gmail.com
Mon Dec 28 01:40:11 CET 2015
Martynas,
Following your advice, I'm trying to get rid of as much of the custom object
model as possible and to build something with XSL/XSLT instead.
I've posted the latest comments regarding this issue on my blog:
http://cognescent.blogspot.com.ar/2015/12/cognescents-architectural-refactorings.html
The source repository is not yet updated with these changes; it's only
analysis work so far.
I'd like to use some kind of functional programming for a declarative
service/interface model. My current approach attempts to return triples, with
wrappers in between. With a working metamodel behind them, the declarative
part should work fine.
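To make that concrete, this is roughly the shape I have in mind (just a
sketch with made-up names, assuming the Jena 3.x API; not the actual code):

    import org.apache.jena.rdf.model.Model;
    import org.apache.jena.rdf.model.Statement;
    import java.util.function.Function;
    import java.util.stream.Stream;

    // A declarative "service" is just a pure function from an input Model
    // to a stream of Statements (triples); composition stays functional.
    interface TripleService extends Function<Model, Stream<Statement>> {

        // Combine two services: apply both to the same model and concatenate.
        default TripleService plus(TripleService other) {
            return model -> Stream.concat(this.apply(model), other.apply(model));
        }
    }

A concrete service could then be as simple as
model -> model.listStatements().toList().stream(), filtered and rewritten by
whatever the metamodel prescribes.
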
On Sun, Dec 6, 2015 at 10:59 AM, Martynas Jusevičius <martynas at graphity.org>
wrote:
> Sebastian,
>
> I've looked at the code on Git. I think we've gone through this design
> phase as well: a custom object model.
>
> If you constrain yourself to the standard RDF model and manipulate it
> using SPARQL (or an RDF API such as Jena or Sesame), then you can map
> other data models into it using vocabularies.
>
> These class names look like they're taken from Jena's codebase,
> just oversimplified and with much poorer functionality:
>
> http://sourceforge.net/p/cognescent/repository/ci/master/tree/src/net/java/cognescent/sbi/model/onto/
>
> In other words, a domain model at the object layer is not necessary at
> all if you use RDF. And custom APIs are not necessary, as you can reuse
> generic open-source libraries. Constraint fosters creativity :)
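>
> To illustrate (a rough sketch with a made-up vocabulary namespace, using
> the Jena API): a "row" coming from any data source becomes a plain
> resource described by properties, and that is all the object model you
> need:
>
>     import org.apache.jena.rdf.model.Model;
>     import org.apache.jena.rdf.model.ModelFactory;
>     import java.util.LinkedHashMap;
>     import java.util.Map;
>
>     public class RowToRdf {
>         // Hypothetical vocabulary namespace; any domain vocabulary works
>         // the same way.
>         static final String EX = "http://example.org/vocab#";
>
>         public static void main(String[] args) {
>             Map<String, String> row = new LinkedHashMap<>();
>             row.put("carModel", "Fiat Punto");
>             row.put("rentalCity", "Buenos Aires");
>
>             Model model = ModelFactory.createDefaultModel();
>             // Each column name becomes a property in the vocabulary,
>             // each cell value a literal on the row's resource.
>             row.forEach((column, value) ->
>                 model.createResource("http://example.org/data/rental/1")
>                      .addProperty(model.createProperty(EX, column), value));
>
>             model.write(System.out, "TURTLE");
>         }
>     }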
>
> They say Linked Data should not be treated as a hammer for which every
> problem appears as a nail. But it's the first hammer that includes a
> manual on how to turn problems into nails -- and then you hammer them
> down with ease :)
>
>
> Martynas
> graphityhq.com
>
> On Fri, Dec 4, 2015 at 5:55 PM, Sebastian Samaruga <cognescent at gmail.com>
> wrote:
> > Martynas,
> >
> > The sources are in the Git repository of the SourceForge project. They are
> > in the form of an importable Eclipse project; it is not yet a Maven project.
> > Once you import the project into Eclipse from the repository (in Eclipse's
> > repository browser, choose "Import as a project in the workspace"), you must
> > set some library paths to make it work in your environment.
> >
> > I'm not using the relational model; it is just an analogy.
> >
> > Sebastian.
> >
> >
> >
> >
> > On Fri, Dec 4, 2015 at 5:30 AM, Martynas Jusevičius <martynas at graphity.org>
> > wrote:
> >>
> >> Sebastian,
> >>
> >> as I mentioned before, a .war file is a poor channel of distribution these
> >> days. Please show us your source code and/or deployed application.
> >>
> >> Are you using the relational model or your own metamodel for data
> >> integration? This paper explains the advantage of RDF and triples as the
> >> canonical model:
> >>
> >>
> >> http://www.topquadrant.com/docs/whitepapers/information-enlightenment-2.0-final.pdf
> >>
> >> By putting RDF at the center, you could reuse the RESTful Linked Data
> >> processing from Graphity and focus on the BI part, which can be
> >> implemented with SPARQL.
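> >>
> >> For example, a typical BI-style aggregation is a short SPARQL query; a
> >> rough Jena sketch (with an invented vocabulary) would be:
> >>
> >>     import org.apache.jena.query.QueryExecution;
> >>     import org.apache.jena.query.QueryExecutionFactory;
> >>     import org.apache.jena.query.ResultSetFormatter;
> >>     import org.apache.jena.rdf.model.Model;
> >>
> >>     public class RentalsPerCity {
> >>         // Count rentals per city over any Jena Model (or, with a small
> >>         // change, a remote SPARQL endpoint).
> >>         public static void count(Model model) {
> >>             String sparql =
> >>                 "PREFIX ex: <http://example.org/vocab#> " +
> >>                 "SELECT ?city (COUNT(?rental) AS ?rentals) " +
> >>                 "WHERE { ?rental ex:rentalCity ?city } " +
> >>                 "GROUP BY ?city ORDER BY DESC(?rentals)";
> >>             try (QueryExecution qe = QueryExecutionFactory.create(sparql, model)) {
> >>                 ResultSetFormatter.out(qe.execSelect());
> >>             }
> >>         }
> >>     }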
> >>
> >>
> >> Martynas
> >> graphityhq.com
> >>
> >> On Wed, 2 Dec 2015 at 20:20, Sebastian Samaruga <cognescent at gmail.com>
> >> wrote:
> >>>
> >>> Martynas,
> >>>
> >>> I'm happy you replied to my message. I've been watching Graphity and it is
> >>> awesome. But although our scopes seem to be similar, I think there are huge
> >>> differences at the implementation level and at the presentation level.
> >>>
> >>> What I would like to have built is a 'semantic' browser focused on
> >>> analysis and mining for business intelligence and on building indexes and
> >>> process flow definitions. I use forms for that, and I also use XSL, but for
> >>> data coming from my metamodel, serialized from JAXB beans. My metamodel is a
> >>> completely normalized scheme in which 'tables', 'columns' and 'values' are
> >>> interchangeable entities aggregated into 'rows' (Mappings). It comes from a
> >>> variety of sources, including but not limited to RDF. The metamodel has a
> >>> notion of triples regarding the application state, and I use them when
> >>> triple exchange is needed.
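> >>>
> >>> Roughly, the idea is something like this (illustrative names only, a
> >>> sketch rather than the actual source):
> >>>
> >>>     import java.util.LinkedHashMap;
> >>>     import java.util.Map;
> >>>
> >>>     // Tables, columns and values are all the same kind of entity; a
> >>>     // Mapping ("row") just aggregates entities keyed by other entities.
> >>>     class Entity {
> >>>         final String id; // URI, column name, literal value, etc.
> >>>         Entity(String id) { this.id = id; }
> >>>     }
> >>>
> >>>     class Mapping {
> >>>         final Entity metaclass; // the "table" is itself an entity
> >>>         final Map<Entity, Entity> roleValues = new LinkedHashMap<>();
> >>>
> >>>         Mapping(Entity metaclass) { this.metaclass = metaclass; }
> >>>
> >>>         // Each entry can be read back as a triple: (this row, role, value).
> >>>         void put(Entity role, Entity value) { roleValues.put(role, value); }
> >>>     }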
> >>>
> >>> I'm aiming to be able, for example, to build queries with analysis
> >>> metadata from the URIs given to a REST service. Since this is a HATEOAS
> >>> service, a client could browse through the use cases / contexts with
> >>> metadata alone. Loaders from multiple sources (JDBC, RDF, XML, JSON, CSV,
> >>> SPARQL, etc.) exist to populate the mappings layer. The browser UI
> >>> aggregates 'tables' (metaclasses); once a metaclass is selected it shows its
> >>> 'columns' (roles), and selecting a 'column' shows the column values. If I
> >>> select one 'row' while navigating (instances), only the values of that
> >>> instance (Mapping) are shown.
> >>>
> >>> You can check it on SourceForge:
> >>> http://sourceforge.net/projects/cognescent/
> >>>
> >>> The WAR file downloadable there shows examples with a sample data-loading
> >>> form. Modify datasources.xml in the src root to test other datasources. The
> >>> XSL templates (the 'Resumen' button in the browser) currently only output
> >>> the raw XML. Once the stylesheets are developed, they'll give a cleaner
> >>> knowledge view.
> >>>
> >>> A sample (dummy) URI to check in the REST interface would look like this:
> >>>
> >>> http://localhost:8080/CognescentSBI/resources/data/metaId:instId[predAxisId]/instId:instId[predAxisId]/instId:instId[predAxisId]
> >>>
> >>> To test the REST service go to:
> >>> [your_server]:[your_port]/CognescentSBI/resources/sbi/data
> >>>
> >>> The predicate axis IDs (predAxisId) will be used by the Analyzer
> >>> component, once it is implemented, so that it can aggregate instances by
> >>> the values of those predicates. The URI is parsed to build the faceted
> >>> queries.
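> >>>
> >>> The parsing itself is straightforward; roughly (an illustrative sketch,
> >>> not the actual parser):
> >>>
> >>>     import java.util.ArrayList;
> >>>     import java.util.List;
> >>>     import java.util.regex.Matcher;
> >>>     import java.util.regex.Pattern;
> >>>
> >>>     // Splits a path like metaId:instId[predAxisId]/instId:instId[predAxisId]
> >>>     // into facet segments.
> >>>     class FacetPathParser {
> >>>         static class Segment {
> >>>             final String left, right, predAxis;
> >>>             Segment(String l, String r, String p) { left = l; right = r; predAxis = p; }
> >>>         }
> >>>
> >>>         private static final Pattern SEGMENT =
> >>>             Pattern.compile("([^:/\\[]+):([^/\\[]+)\\[([^\\]]+)\\]");
> >>>
> >>>         static List<Segment> parse(String path) {
> >>>             List<Segment> segments = new ArrayList<>();
> >>>             for (String part : path.split("/")) {
> >>>                 Matcher m = SEGMENT.matcher(part);
> >>>                 if (m.matches()) {
> >>>                     segments.add(new Segment(m.group(1), m.group(2), m.group(3)));
> >>>                 }
> >>>             }
> >>>             return segments;
> >>>         }
> >>>     }
> >>>
> >>> Each segment's predAxisId then selects the predicate to aggregate on when
> >>> the faceted query is built.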
> >>>
> >>> Best,
> >>> Sebastian.
> >>>
> >>>
> >>> On Wed, Nov 18, 2015 at 1:29 PM, Martynas Jusevičius
> >>> <martynas at graphity.org> wrote:
> >>>>
> >>>> Sebastian,
> >>>>
> >>>> there was little traffic on this list so far, yet your message went
> >>>> unnoticed. Sorry for that!
> >>>>
> >>>> You can view the webpage for your data as an (X)HTML representation of
> >>>> RDF. It can be completely data-driven and simply a function that
> >>>> transforms the RDF response into an (X)HTML response. You could do it
> >>>> in a number of ways: using an imperative language such as Java, or
> >>>> various template engines.
> >>>>
> >>>> From a declarative perspective, XSLT (2.0) is probably the best tool
> >>>> for this. The RDF/XML syntax is admittedly not straightforward, but
> >>>> XSLT is a functional and Turing-complete language that can handle it
> >>>> effectively and transform it to (X)HTML or other formats.
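> >>>>
> >>>> In its simplest form the whole pipeline is just one transformation; a
> >>>> minimal sketch with placeholder file names (an XSLT 2.0 processor such
> >>>> as Saxon would need to be on the classpath for 2.0 features):
> >>>>
> >>>>     import javax.xml.transform.Transformer;
> >>>>     import javax.xml.transform.TransformerFactory;
> >>>>     import javax.xml.transform.stream.StreamResult;
> >>>>     import javax.xml.transform.stream.StreamSource;
> >>>>     import java.io.File;
> >>>>
> >>>>     public class RdfXmlToXhtml {
> >>>>         public static void main(String[] args) throws Exception {
> >>>>             // Compile the stylesheet, then transform RDF/XML into (X)HTML.
> >>>>             Transformer transformer = TransformerFactory.newInstance()
> >>>>                     .newTransformer(new StreamSource(new File("rdf-to-xhtml.xsl")));
> >>>>             transformer.transform(new StreamSource(new File("data.rdf")),
> >>>>                                   new StreamResult(new File("data.xhtml")));
> >>>>         }
> >>>>     }
> >>>>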
> >>>> Graphity Client includes a number of predefined generic XSLT
> >>>> stylesheets that can be imported, extended, customized etc:
> >>>>
> >>>>
> >>>> https://github.com/Graphity/graphity-client/tree/master/src/main/webapp/static/org/graphity/client/xsl
> >>>>
> >>>> The facets part is more complicated. If you are using SPARQL to
> >>>> retrieve RDF results, you can modify the query on the fly, to narrow
> >>>> down matches based on the facet selections. That is what we did in
> >>>> this project: http://dedanskeaviser.dk/newspapers
> >>>> If not, then it depends on the API you are using to access RDF.
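> >>>>
> >>>> A rough Jena sketch of that on-the-fly narrowing (facet and vocabulary
> >>>> names are invented for the example):
> >>>>
> >>>>     import org.apache.jena.query.ParameterizedSparqlString;
> >>>>     import org.apache.jena.query.Query;
> >>>>
> >>>>     public class FacetQueryBuilder {
> >>>>         // Adds a triple pattern per selected facet and binds its value.
> >>>>         public static Query build(String carModel, String city) {
> >>>>             StringBuilder where = new StringBuilder("?rental a ex:CarRental . ");
> >>>>             if (carModel != null) where.append("?rental ex:carModel ?model . ");
> >>>>             if (city != null)     where.append("?rental ex:rentalCity ?city . ");
> >>>>
> >>>>             ParameterizedSparqlString pss = new ParameterizedSparqlString(
> >>>>                 "PREFIX ex: <http://example.org/vocab#> " +
> >>>>                 "SELECT ?rental WHERE { " + where + "}");
> >>>>             if (carModel != null) pss.setLiteral("model", carModel);
> >>>>             if (city != null)     pss.setLiteral("city", city);
> >>>>             return pss.asQuery();
> >>>>         }
> >>>>     }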
> >>>>
> >>>> I am not familiar with DCI or its concept of actors/roles/contexts.
> >>>> Maybe it would be easier if you explained your goals. If you are
> >>>> looking for a functional implementation of data-driven applications
> >>>> that uses REST and HATEOAS and builds interactive forms and other
> >>>> navigation items, it is exactly what we provide:
> >>>> http://graphityhq.com/technology
> >>>>
> >>>> We would be happy to discuss this topic further.
> >>>>
> >>>>
> >>>> Martynas
> >>>> graphityhq.com
> >>>>
> >>>> On Mon, Oct 12, 2015 at 10:26 PM, Sebastian Samaruga
> >>>> <cognescent at gmail.com> wrote:
> >>>> > Sorry if this is not the right list. I came here from a reply to a
> >>>> > previous post.
> >>>> >
> >>>> > I don't know if this is something new; it just seemed useful for the
> >>>> > kind of interface I was looking for. I was wondering what would be the
> >>>> > best way to facilitate browsing and search in the application demo I'm
> >>>> > building, which renders RDF / Semantic Web content. I've figured out there
> >>>> > must be a tree hierarchy of categories, roles and instances of data to
> >>>> > which the incoming data parsing adheres, so as to have a common
> >>>> > denominator for different input structures.
> >>>> >
> >>>> > Having these structures, while browsing through the tree of data an
> >>>> > item (leaf or node) could be 'picked up' as a facet. For example, say the
> >>>> > scenario has "Car Rental" as a category, "Car Model" and "Rental City" as
> >>>> > roles, and many car models and many cities as instances. What if I could
> >>>> > pick a car model, a city, or both, press "Aggregate", and have this result
> >>>> > in root categories for each specific car rental ("Car Rental 1", "Car
> >>>> > Rental 2", etc.), with their roles populated with the corresponding
> >>>> > criteria values (the city corresponding to "Car Rental 1" given its car,
> >>>> > etc.)?
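> >>>> >
> >>>> > In SPARQL terms, I imagine pressing "Aggregate" as running something
> >>>> > like this (made-up vocabulary, just to illustrate the shape of the
> >>>> > result):
> >>>> >
> >>>> >     import org.apache.jena.query.QueryExecution;
> >>>> >     import org.apache.jena.query.QueryExecutionFactory;
> >>>> >     import org.apache.jena.query.ResultSetFormatter;
> >>>> >     import org.apache.jena.rdf.model.Model;
> >>>> >
> >>>> >     public class AggregateByFacet {
> >>>> >         // For every Car Rental matching the picked facet value,
> >>>> >         // list all of its roles and values, one row per role.
> >>>> >         public static void run(Model model) {
> >>>> >             String sparql =
> >>>> >                 "PREFIX ex: <http://example.org/vocab#> " +
> >>>> >                 "SELECT ?rental ?role ?value WHERE { " +
> >>>> >                 "  ?rental a ex:CarRental ; " +
> >>>> >                 "          ex:carModel \"Fiat Punto\" ; " +
> >>>> >                 "          ?role ?value . " +
> >>>> >                 "} ORDER BY ?rental ?role";
> >>>> >             try (QueryExecution qe = QueryExecutionFactory.create(sparql, model)) {
> >>>> >                 ResultSetFormatter.out(qe.execSelect());
> >>>> >             }
> >>>> >         }
> >>>> >     }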
> >>>> >
> >>>> > Maybe this sounds dumb. But the question is: how difficult would it be
> >>>> > to build such filter criteria using only RDF datasources? RDF statement
> >>>> > resources are not individualized by their occurrences: an RDF resource is
> >>>> > the same regardless of which statements it occurs in. And although I've
> >>>> > found a way to individualize occurrences of, for example, Car Rental(s),
> >>>> > I can't yet find a way to correlate these instances with the instances of
> >>>> > their roles.
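> >>>> >
> >>>> > To make the problem concrete (invented vocabulary, using Jena just for
> >>>> > illustration): if the occurrences all share one resource, the pairing
> >>>> > between role values is simply not in the graph:
> >>>> >
> >>>> >     import org.apache.jena.rdf.model.Model;
> >>>> >     import org.apache.jena.rdf.model.ModelFactory;
> >>>> >     import org.apache.jena.rdf.model.Property;
> >>>> >     import org.apache.jena.rdf.model.Resource;
> >>>> >
> >>>> >     public class SharedResourceProblem {
> >>>> >         static final String EX = "http://example.org/vocab#";
> >>>> >
> >>>> >         public static void main(String[] args) {
> >>>> >             Model m = ModelFactory.createDefaultModel();
> >>>> >             Property carModel = m.createProperty(EX, "carModel");
> >>>> >             Property rentalCity = m.createProperty(EX, "rentalCity");
> >>>> >
> >>>> >             // Two rentals described against the same resource:
> >>>> >             Resource rentals = m.createResource(EX + "CarRental");
> >>>> >             rentals.addProperty(carModel, "Fiat Punto")
> >>>> >                    .addProperty(carModel, "Ford Ka")
> >>>> >                    .addProperty(rentalCity, "Buenos Aires")
> >>>> >                    .addProperty(rentalCity, "Rosario");
> >>>> >
> >>>> >             // Nothing here says which model goes with which city.
> >>>> >             m.write(System.out, "TURTLE");
> >>>> >         }
> >>>> >     }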
> >>>> >
> >>>> > Also, I'm restricting my mappings (ontology processing output) to three
> >>>> > levels of depth, which seems arbitrary; I could, instead, not restrict the
> >>>> > graph to any depth. But I'll keep trying for a while with this arrangement.
> >>>> > The correlation of categories, roles and instances with some of the
> >>>> > concepts in the DCI programming model
> >>>> > (https://en.wikipedia.org/wiki/Data,_context_and_interaction) seems
> >>>> > attractive, as it could allow for a model-driven approach to building the
> >>>> > client application, again with the ontology 'common factors' concept in
> >>>> > mind.
> >>>> >
> >>>> > And the concept of actors, roles and contexts seems like an ideal case
> >>>> > for a functional-language implementation of what could be a runtime
> >>>> > environment for data-driven applications (think of a REST-HATEOAS client
> >>>> > that interactively builds forms and other navigation items simply by
> >>>> > content negotiation with a functional, data-state-based endpoint).
> >>>> >
> >>>> > Source, examples and the demo web application are available on the
> >>>> > project web page:
> >>>> >
> >>>> > http://cognescent.blogspot.com
> >>>> > http://sourceforge.net/projects/cognescent/
> >>>> >
> >>>> > Regards,
> >>>> > Sebastian
> >>>
> >>>
> >
>