[PragmaticWeb] Which semantics?

Sebastian Samaruga ssamarug at gmail.com
Mon Feb 27 05:35:19 CET 2017


Yes, sure, you are right. Maybe I explained poorly that I'd like to consume
'standard' RDF from the Web: Linked Data endpoints, SPARQL endpoints, and
data converted to RDF from other data sources (relational, etc.).

Then I'd like to do 'internal' processing in a way that does not resemble
'conventional' SW techniques (this is what I'm looking for feedback on, and
what the documents are about), and then, yes, have the results exposed in
the previously mentioned formats / protocols (standards) and many others
(e.g. SOAP) via different 'Ports'.
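
For the ingestion side, a minimal sketch (in Python, assuming the rdflib
and SPARQLWrapper libraries; the DBpedia endpoint and resource are only
placeholders) of what consuming 'standard' RDF could look like:

from rdflib import Graph
from SPARQLWrapper import SPARQLWrapper, JSON

# Dereference a Linked Data URI; content negotiation returns RDF.
g = Graph()
g.parse("http://dbpedia.org/resource/Berlin")
print(len(g), "triples fetched by dereferencing the URI")

# Ask a public SPARQL endpoint for a handful of triples about the resource.
endpoint = SPARQLWrapper("https://dbpedia.org/sparql")
endpoint.setQuery("""
SELECT ?p ?o WHERE { <http://dbpedia.org/resource/Berlin> ?p ?o } LIMIT 10
""")
endpoint.setReturnFormat(JSON)
for row in endpoint.query().convert()["results"]["bindings"]:
    print(row["p"]["value"], row["o"]["value"])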

Maybe I'm trying to explain the implementation details of a 'database
engine' to someone only interested in SQL queries. I think I pointed that
out before.

Best,
Sebastián.


On Feb 27, 2017 1:14 AM, "Pat Hayes" <phayes at ihmc.us> wrote:

Thanks for clarifying, but I think you are misguided. You want to “use RDF"
but you don’t want to pay any attention to the RDF specifications, do I
have that right? If so, you are not using RDF, but mis-using it. If you
proceed, I would strongly suggest that you do not use any RDF terminology
to describe what you are doing, or RDF tools to process whatever it is you
finish up building. The entire point and purpose of RDF is to provide a
standard notation for information exchange on the Web. If you use the RDF
data model without also conforming to the RDF standards (including the RDF
semantics, which is a normative part of the standard) then you will not be
contributing to this global effort, and whatever you finish up doing will
not be usable as RDF, will probably break if processed by RDF engines, and
will cause only pain and confusion. (An analogy might be to suggest using
HTML syntax as a musical notation, ignoring what browsers do with it.)

You say you are a SW beginner, but surely the first thing a beginner should
do is to learn how to use the basic tools (in the SW case, RDF - or JSON-LD
if you prefer - and linked data conventions) rather than deliberately
ignoring the operation manual and setting out to mis-use them.

You say that what you are doing is “not just RDF”. You misspeak: it is not
RDF at all.

Best wishes

Pat Hayes


On Feb 25, 2017, at 1:45 AM, Sebastian Samaruga <ssamarug at gmail.com> wrote:

I'm reposting this message because it seems that maybe I was not clear
enough, or was misunderstood. I don't want to reinvent anything, not even
temporal logic or ontology merging. What I'd like is to have all of this
implemented, stretching the RDF data model in ways that are not
conventional at all, except for using it as a kind of universal datasource
format and being able to expose ontologies, while internally performing
sorting, merging and many kinds of augmentation using BI / Big Data
learning, analysis and mining techniques, not only 'pure' RDF / OWL
inference.

Note that the included document(s) are just an enumeration of topics
outlining what the aspects of a model built for this purpose could look
like.

First, thanks to everybody for your interest in helping me. I'll be
reviewing each comment and link. The thing is, I'm a SW beginner, and maybe
I was not expressing myself correctly.

I'm not trying to do 'canonical' Semantic Web, Linked Data, or even RDF. I
just take RDF quads as a simple serialization mechanism and try to build
upon them the equivalent of what a 'relational model' is for a traditional
database.
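
As a sketch of the 'quads as a universal datasource format' idea (Python
with rdflib; the N-Quads below are invented, with the fourth element
recording which source a statement came from):

from rdflib import Dataset

nquads = """
<http://example.org/john> <http://xmlns.com/foaf/0.1/name> "John" <http://example.org/source/crm> .
<http://example.org/john> <http://xmlns.com/foaf/0.1/knows> <http://example.org/jane> <http://example.org/source/hr> .
"""

ds = Dataset()
ds.parse(data=nquads, format="nquads")

# Every record, whatever its origin, is now just (subject, predicate,
# object, graph) and can be processed uniformly.
for s, p, o, ctx in ds.quads((None, None, None, None)):
    print(s, p, o, "| graph:", ctx)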

For this I encode lots of metadata about the original input 'RDF' (any
data source I use can be encoded as RDF) and arrange metamodels which
aggregate and infer knowledge about the original data.

I try to merge and align equivalent resources and statements, perform
retrieval and comparison of resource relationships, and finally sort
resources and statements logically and temporally.
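
One conventional way to 'merge and align equivalent resources' is
owl:sameAs smushing: rewriting statements about equivalent IRIs onto a
single canonical IRI. A minimal sketch with rdflib (the John/Juan URIs echo
the example later in this thread; the sameAs link and property values are
invented):

from rdflib import Graph
from rdflib.namespace import OWL

g = Graph()
g.parse(data="""
@prefix owl: <http://www.w3.org/2002/07/owl#> .
@prefix foaf: <http://xmlns.com/foaf/0.1/> .

<http://somedomain.net/people/John> foaf:name "John" .
<http://anotherdomain.com/staff/Juan> foaf:mbox <mailto:john@example.org> .
<http://somedomain.net/people/John> owl:sameAs <http://anotherdomain.com/staff/Juan> .
""", format="turtle")

# Choose one canonical IRI per sameAs pair and rewrite statements onto it.
# A full solution would first compute the closure of the equivalence relation.
canonical = {}
for a, b in g.subject_objects(OWL.sameAs):
    canonical[b] = a

merged = Graph()
for s, p, o in g:
    if p == OWL.sameAs:
        continue
    merged.add((canonical.get(s, s), p, canonical.get(o, o)))

print(merged.serialize(format="turtle"))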

For this I aggregate statement resources into levels and classify those
resources using my own kind of type inference.
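
The 'levels' and the home-grown type inference are not specified in this
thread; for comparison only, the standard RDFS flavour of type inference
(rule rdfs2, deriving rdf:type from rdfs:domain) looks roughly like this in
rdflib, with invented example data:

from rdflib import Graph
from rdflib.namespace import RDF, RDFS

g = Graph()
g.parse(data="""
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix foaf: <http://xmlns.com/foaf/0.1/> .

foaf:knows rdfs:domain foaf:Person .
<http://example.org/john> foaf:knows <http://example.org/jane> .
""", format="turtle")

# rdfs2: if ?p rdfs:domain ?c and ?s ?p ?o, then ?s rdf:type ?c.
inferred = []
for p, c in g.subject_objects(RDFS.domain):
    for s, _o in g.subject_objects(p):
        inferred.append((s, RDF.type, c))
for triple in inferred:
    g.add(triple)

for s, c in g.subject_objects(RDF.type):
    print(s, "is a", c)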

Templates are a concept I use for 'a graph of resources and statements,
with types, variables and wildcards also represented as resources'. A
Template applied to another resource (Template) may yield another graph.
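
That Template idea (a graph containing variables and wildcards that,
applied to another graph, yields a new graph) is at least superficially
close to what a SPARQL CONSTRUCT query does; a rough analogy in rdflib,
with invented data:

from rdflib import Graph

g = Graph()
g.parse(data="""
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
<http://example.org/john> foaf:knows <http://example.org/jane> .
""", format="turtle")

# The "template": a pattern with variables; matching it against g yields
# a new graph built from the CONSTRUCT clause.
template = """
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
CONSTRUCT { ?b foaf:knows ?a }
WHERE     { ?a foaf:knows ?b }
"""

result = g.query(template).graph
print(result.serialize(format="turtle"))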

Over Templates I build a 'protocol'. Each request/response is a Template
(graph), and the dialogue finishes when each party 'completes', or
resolves, its corresponding types, variables and wildcards.

Given such a 'protocol', each metamodel layer narrows itself into Facades:
another (sub)graph of each level, corresponding to that level's kind of
constructions (facts, objects, purposes: anOrder, orders, order
management).

None of the 'constructions' I've made are (or even pretend to be) based on
Semantic Web or RDF standards. If you read the document, please bear this
in mind.

There I've just laid down sparse propositions about the 'parts' of this
'relational model', encoded in metadata which, in turn, encodes the layers
of models using just three classes.

It's funny that I've used RDF as a kind of universal input translation
mechanism and that it somehow inspired the contexts, kinds and metamodel
abstractions; especially when I recall almost everyone on these lists
telling me I'm wrong, perhaps because I didn't warn them that this is
something else, not just RDF.

So, before being misunderstood again, I'll share this link, which is not a
paper, specification, documentation or draft, and is not even complete. It
is just a TOC, a raw placeholder where the blanks can be filled in and
corrections made.

Available for comments at:
https://docs.google.com/document/d/1VrvIV4rXyUyGl5jvDAgsEV4GRxX_NvMICgkdnbvj9DQ/edit?usp=drivesdk

The document just enumerates concepts, some mentioned above. It starts by
describing the resource data/meta models. It is just titles and brief
explanations. Sorry for posting such an early 'draft' (dump), but my aim is
not to publish a document; it is to build a set of 'patterns' for
modelling, and I'm looking for 'constructive' feedback.

I'll attach a previous document as a reference for things I mention in the
Google doc that are perhaps not so well defined there. The Google doc, once
finished, will reflect all the concepts of the proposal more coherently.

Thanks,
Sebastián.


On Feb 18, 2017 3:01 AM, "Pat Hayes" <phayes at ihmc.us> wrote:

>
> On Feb 15, 2017, at 5:43 AM, Sebastian Samaruga <ssamarug at gmail.com>
> wrote:
>
> OK. Sorry again for my lack of knowledge, but does this mean that
> 'semantic' inference, of the kind that would infer that:
>
> http://somedomain.net/people/John
> (is the same as)
> http://anotherdomain.com/staff/Juan
>
> is not possible without resorting to previous knowledge
>
>
> If I understand you, indeed it is not possible without previous knowledge;
> but not of dictionaries, but rather of identities. Quite a lot of the
> information stored in RDF is exactly of this form, expressed using the
> relational URI owl:sameAs, in the form of an RDF triple.
>
> or dictionaries or, even worse, NLP over those URIs? Not to mention
> 'inferring' identity between 'The capital of France' and 'Paris', or
> 100 cm / 1 metre.
>
>
> All such identities must be expressed directly or indirectly in some
> ontology, probably in the form of an RDF graph. For the first example it
> might be more useful to simply say that Paris (Subject) is the capital city
> of (Relation) France (Object) in a single triple, rather than use
> owl:sameAs. Identity of measure terms has been tackled by ontologies of
> units and measurements, such as OM (http://www.wurvoc.org/vocabularies/om-1.6/)
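
As a concrete rendering of that suggestion (a sketch only, in Python with
rdflib; ex:isCapitalOf is a made-up predicate, and DBpedia's own ontology
states the relation in the other direction, as dbr:France dbo:capital
dbr:Paris):

from rdflib import Graph

g = Graph()
g.parse(data="""
@prefix ex:  <http://example.org/ns#> .
@prefix dbr: <http://dbpedia.org/resource/> .

dbr:Paris ex:isCapitalOf dbr:France .
""", format="turtle")

# A single triple carries the relationship; no identity claim is needed.
for s, p, o in g:
    print(s, p, o)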
>
>
> Another kind of inference that simply concatenating datasets does not
> solve is that of 'ordering':
>
> Joe takes his car out.
> Joe washes his car.
> Joe takes his car in.
>
> If the statements come in any order, how could one reason about the
> correct sequence? This would be indispensable for propositional-like
> logic and inference.
>
>
> Actually not (indispensable, that is). Logics typically do not use the
> ordering of statements to encode content. (There is a good reason for this,
> having to do with how logical inference rules can be stated, but it would
> take too long to explain.)  To express meaningful orderings, especially
> time orderings (as here), one must describe the ordering relations
> explicitly. There is a huge literature on how to use logics to express such
> things, far more than I can summarize here, but one way to proceed would be
> to describe things one might call ‘events’ (takings in and out, washings,
> etc.), classify them into types or categories (corresponding roughly to the
> English verb) and associate them using ontological relations to times (to
> get the orderings), subjects, objects, and possibly such things as
> locations (where it happened) and reasons (why it happened). All of this
> can be directly written as RDF graphs and reasoned about using RDF or OWL
> reasoning engines.
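
A small sketch of that event style in rdflib, with invented IRIs and plain
xsd:dateTime literals (a serious model would use a time ontology such as
the W3C OWL-Time vocabulary):

from rdflib import Graph

g = Graph()
g.parse(data="""
@prefix ex:  <http://example.org/ns#> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .

ex:e1 a ex:TakingOut ; ex:agent ex:Joe ; ex:patient ex:JoesCar ;
      ex:atTime "2017-02-18T09:00:00"^^xsd:dateTime .
ex:e2 a ex:Washing   ; ex:agent ex:Joe ; ex:patient ex:JoesCar ;
      ex:atTime "2017-02-18T09:30:00"^^xsd:dateTime .
ex:e3 a ex:TakingIn  ; ex:agent ex:Joe ; ex:patient ex:JoesCar ;
      ex:atTime "2017-02-18T10:00:00"^^xsd:dateTime .
""", format="turtle")

# The sequence is recovered from the explicit time relation, not from the
# order in which the statements happen to be written down.
q = """
PREFIX ex: <http://example.org/ns#>
SELECT ?event ?type ?t
WHERE  { ?event a ?type ; ex:atTime ?t }
ORDER BY ?t
"""
for event, etype, t in g.query(q):
    print(t, etype, event)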
>
> All of this has been thought about very hard by a very large number of
> people for about 60 years now. You don’t want to try to re-invent it all,
> would be my advice.
>
> Best wishes
>
> Pat Hayes
>
>
> Best,
> Sebastián.
>
>
>
>
> On Feb 14, 2017 4:20 PM, "Martynas Jusevičius" <martynas at graphity.org>
> wrote:
>
>> Sebastian,
>>
>> I think it is useful to think about the merge operation between datasets.
>>
>> Here I mean a "physical" merge, where records with the same
>> identifiers become augmented with more data, when multiple datasets
>> are merged together. A "logical", or "semantic" merge, with vocabulary
>> mappings etc., comes on top of that.
>>
>> So if you take the relational or XML models, there is no generic way
>> to do that. With RDF, there is: you simply concatenate the datasets,
>> because they have a stable structure (triples) and built-in global
>> identifiers (URIs).
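
A minimal sketch of that 'physical' merge with rdflib (the two small graphs
are invented): concatenating the datasets is just a graph union, and
statements about the same IRI end up attached to one node.

from rdflib import Graph

a = Graph()
a.parse(data="""
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
<http://example.org/john> foaf:name "John" .
""", format="turtle")

b = Graph()
b.parse(data="""
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
<http://example.org/john> foaf:mbox <mailto:john@example.org> .
""", format="turtle")

# Union of the two graphs: the record for the shared IRI is augmented,
# not duplicated, because triples and IRIs are globally comparable.
merged = a + b
print(merged.serialize(format="turtle"))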
>>
>> That said, you should try approaching things from another end: start
>> building a small but concrete solution and solve problems one by one,
>> instead of overthinking/reinventing the top-down architecture. Until
>> you do that, you will probably not get relevant advice on these
>> mailing lists.
>>
>> On Tue, Feb 14, 2017 at 6:21 PM, Sebastian Samaruga <ssamarug at gmail.com>
>> wrote:
>> > Sorry for being so ignorant. But what could be called 'semantic' (in the
>> > sense of 'meaning', I suppose) about the current frameworks, at least the
>> > couple I know, would be if ontologies of some kind could assert which of
>> > their instances, statements and resources are equivalent (whether they
>> > are in a different language/encoding, or are different 'contextual' terms
>> > for the same subjects, for example).
>> >
>> > Another important lack of 'semantics' is ordering (temporal or
>> > otherwise), where a statement or resource should be treated at least in
>> > relation to its previous or following elements.
>> >
>> > If my last posts were so blurry, it is because I am trying to address
>> > some of these issues, among others, while trying not to fall for the
>> > promise that adhering to one format will free us all of any
>> > interoperability hassles. Remember a similar promise from XML: "All we
>> > have to do is share DTDs and interoperate". I'm still trying to give the
>> > format a twist (RDF Quads), but I'll publish a Google Document open for
>> > comments.
>> >
>> > Best,
>> > Sebastián.
>> >
>>
>
> ------------------------------------------------------------
> IHMC                                     (850)434 8903 home
> 40 South Alcaniz St.            (850)202 4416   office
> Pensacola                            (850)202 4440   fax
> FL 32502                              (850)291 0667   mobile (preferred)
> phayes at ihmc.us       http://www.ihmc.us/users/phayes
>
>
>
>
>
> <Datastore.pdf><Notes.pdf>


------------------------------------------------------------
IHMC                                     (850)434 8903 home
40 South Alcaniz St.            (850)202 4416   office
Pensacola                            (850)202 4440   fax
FL 32502                              (850)291 0667   mobile (preferred)
phayes at ihmc.us       http://www.ihmc.us/users/phayes