ingest_json

This module contains classes to perform JSON-LD uplifting operations, facilitating the conversion of standard JSON into JSON-LD.

JSON-LD uplifting is performed in four steps:

  • Input filter pre-processing (e.g., CSV). This step is optional.
  • Initial transformation using jq expressions (transform).
  • Class annotation, adding @type to the root object and/or to specific nodes using jsonpath-ng expressions (types).
  • Injection of custom JSON-LD @context, either globally or inside specific nodes, using jsonpath-ng expressions (context).

The details for each of these operations are declared inside context definition files, which are YAML documents specifying the uplift workflow. For each input JSON file, the corresponding YAML context definition is detected at runtime:

  1. A domain configuration can be used: a JSON (or YAML) document that maps JSON files to context definitions, acting as a registry.
  2. If no registry is used, or the input file is not in the registry, a file with the same name but a .yml extension will be used, if it exists.
  3. Otherwise, a _json-context.yml file in the same directory will be used, if it exists.

If no context definition file is found after these three steps, the file is skipped.
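To make the structure concrete, here is a minimal, illustrative context definition. The top-level keys (base-uri, transform, types, context) match those consumed by the source code below; the jq expression, JSONPath locations and vocabulary URIs are placeholders, not part of the module:

base-uri: https://example.com/features/
transform: '{"features": [.features[]]}'   # jq pre-processing
types:
  $.features[*]: geojson:Feature           # jsonpath-ng location -> @type
context:
  $:                                       # root-level @context
    geojson: https://purl.org/geojson/vocab#
    name: https://schema.org/name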

filenames_from_context(context_fn, domain_config)

Tries to find a JSON/JSON-LD file from a given YAML context definition filename. Priority:

  1. Context file with the same name as the JSON document (e.g., test.yml/test.json).
  2. Context file in the domain configuration (if one is provided).
  3. Context file in the directory (_json-context.yml or _json-context.yaml).

Parameters:

  • context_fn (Path | str): YAML context definition filename. Required.
  • domain_config (DomainConfiguration | None): dict of jsonFile:yamlContextFile mappings. Required.

Returns:

  • list[Path]: the corresponding JSON/JSON-LD filenames, if found.

Source code in ogc/na/ingest_json.py
def filenames_from_context(context_fn: Path | str,
                           domain_config: DomainConfiguration | None) -> list[Path]:
    """
    Tries to find a JSON/JSON-LD file from a given YAML context definition filename.
    Priority:
      1. Context file with same name as JSON doc (e.g. test.yml/test.json)
      2. Context file in domain configuration (if one provided)
      3. Context file in directory (_json-context.yml or _json-context.yaml)
    :param context_fn: YAML context definition filename
    :param domain_config: dict of jsonFile:yamlContextFile mappings
    :return: corresponding JSON/JSON-LD filenames, if found
    """

    result = set()

    if not isinstance(context_fn, Path):
        context_fn = Path(context_fn)

    # 1. Lookup by matching filename
    if re.match(r'.*\.json-?(ld)?$', context_fn.stem):
        # If removing extension results in a JSON/JSON-LD
        # filename, try it
        json_fn = context_fn.with_suffix('')
        if json_fn.is_file():
            result.add(json_fn)
    # Otherwise check with appended JSON/JSON-LD extensions
    for suffix in ('.json', '.jsonld', '.json-ld'):
        json_fn = context_fn.with_suffix(suffix)
        if json_fn.is_file():
            result.add(json_fn)

    # 2. Reverse lookup in registry
    if domain_config:
        result.update(domain_config.uplift_entries.find_files_by_context_fn(context_fn))

    # 3. If directory context file, all .json files in directory
    # NOTE: no .jsonld or .json-ld files, since those could come
    #   from the output of this very script
    # NOTE: excluding those files present in the registry
    if context_fn.stem == '_json-context':
        with scandir(context_fn.parent) as it:
            # cast() takes the type first; wrap entry paths in Path to match
            # the declared return type
            return [Path(x.path) for x in cast(Iterable, it)
                    if x.is_file() and x.name.endswith('.json')]

    return list(result)
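A brief usage sketch (the file paths are placeholders):

from ogc.na.ingest_json import filenames_from_context

# Find the JSON/JSON-LD document(s) that a context definition applies to,
# without using a domain configuration registry
json_files = filenames_from_context('data/test.yml', domain_config=None)
print(json_files)  # e.g. [PosixPath('data/test.json')], if that file exists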

find_contexts(filename, domain_config=None)

Find the YAML context file for a given filename, with the following precedence:

  1. Search in the registry (if provided).
  2. Search for a file with the same base name but a yaml/yml or "-uplift.yml" extension.
  3. Find a _json-context.yml/yaml file in the same directory.

Parameters:

  • filename (Path | str): the filename for which to find the context. Required.
  • domain_config (DomainConfiguration | None): an optional filename:yamlContextFile mapping. Default: None.

Returns:

  • list[Path | str] | None: the YAML context definition paths (Path) and/or profile URIs (str).

Source code in ogc/na/ingest_json.py
def find_contexts(filename: Path | str,
                  domain_config: DomainConfiguration | None = None) -> list[Path | str] | None:
    """
    Find the YAML context file for a given filename, with the following precedence:
        1. Search in registry (if provided)
        2. Search file with same base name but with yaml/yml or "-uplift.yml" extension.
        3. Find _json-context.yml/yaml file in same directory
    :param filename: the filename for which to find the context
    :param domain_config: an optional filename:yamlContextFile mapping
    :return: the YAML context definition paths (Path) and/or profile URIs (str)
    """

    if not isinstance(filename, Path):
        filename = Path(filename)

    # 1. Registry lookup
    if domain_config:
        entry: UpliftConfigurationEntry = domain_config.uplift_entries.find_entry_for_file(filename)
        if entry:
            return entry.uplift_definitions

    # 2. Same filename with yml/yaml extension or autodetect in dir
    for context_path in (
        filename.with_name(filename.stem + '-uplift.yml'),
        filename.with_name(filename.stem + '-uplift.yaml'),
        filename.with_suffix('.yml'),
        filename.with_suffix('.yaml'),
        filename.with_suffix('').with_suffix('.yml'),
        filename.with_suffix('').with_suffix('.yaml'),
        filename.parent / '_json-context.yml',
        filename.parent / '_json-context.yaml',
    ):
        if filename == context_path:
            continue
        if context_path.is_file() and not (filename.suffix == '.jsonld' and filename.with_suffix('.json').is_file()):
            logger.info(f'Autodetected context {context_path} for file {filename}')
            return [context_path]
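And the reverse lookup, again with placeholder paths:

from ogc.na.ingest_json import find_contexts

# Locate the uplift definition(s) for an input document; returns None when
# nothing is autodetected and no registry entry matches
contexts = find_contexts('data/test.json')
if contexts:
    print(contexts)  # e.g. [PosixPath('data/test-uplift.yml')]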

generate_graph(input_data, context=None, base=None, fetch_url_whitelist=None, transform_args=None)

Create a graph from an input JSON document and a YAML context definition file.

Parameters:

  • input_data (dict | list): input JSON data in dict or list format. Required.
  • context (dict[str, Any] | Sequence[dict]): context definition in dict format, or a list thereof. Default: None.
  • base (str | None): base URI for the JSON-LD context. Default: None.
  • fetch_url_whitelist (Sequence[str] | bool | None): list of regular expressions to filter referenced JSON-LD context URLs before retrieving them. If None, no filtering is applied; if an empty sequence or False, remote fetching operations will throw an exception. Default: None.
  • transform_args (dict | None): additional arguments to pass as variables to the jq transform. Default: None.

Returns:

  • UpliftResult: the resulting RDFLib Graph together with the uplifted JSON-LD content.

Source code in ogc/na/ingest_json.py
def generate_graph(input_data: dict | list,
                   context: dict[str, Any] | Sequence[dict] | None = None,
                   base: str | None = None,
                   fetch_url_whitelist: Sequence[str] | bool | None = None,
                   transform_args: dict | None = None) -> UpliftResult:
    """
    Create a graph from an input JSON document and a YAML context definition file.

    :param input_data: input JSON data in dict or list format
    :param context: context definition in dict format, or list thereof
    :param base: base URI for JSON-LD context
    :param fetch_url_whitelist: list of regular expressions to filter referenced JSON-LD context URLs before
        retrieving them. If None, it will not be used; if empty sequence or False, remote fetching operations will
        throw an exception.
    :param transform_args: Additional arguments to pass as variables to the jq transform
    :return: an UpliftResult with the resulting RDFLib Graph and the uplifted JSON-LD content
    """

    if not isinstance(input_data, dict) and not isinstance(input_data, list):
        raise ValueError('input_data must be a list or dictionary')

    g = Graph()
    jdoc_ld = input_data
    if context:
        base_uri = None
        for prefix, ns in DEFAULT_NAMESPACES.items():
            g.bind(prefix, Namespace(ns))

        context_list = context if isinstance(context, Sequence) else (context,)
        for context_entry in context_list:
            base_uri = context_entry.get('base-uri', base_uri)
            jdoc_ld = uplift_json(input_data, context_entry,
                                  transform_args=transform_args)
            if 'context' in context_entry:
                if '$' in context_entry['context']:
                    root_ctx = context_entry['context']['$']
                elif '.' in context_entry['context']:
                    root_ctx = context_entry['context']['.']
                else:
                    continue

                if isinstance(root_ctx, dict):
                    for term, term_val in root_ctx.items():
                        if not term.startswith('@') \
                                and isinstance(term_val, str) \
                                and re.match(r'.+[#/:]$', term_val) \
                                and is_iri(term_val):
                            g.bind(term, term_val)

        if not base:
            if base_uri:
                base = base_uri
            elif '@context' in jdoc_ld:
                # Try to extract from @context
                # If it is a list, iterate until @base is found
                base = None
                if isinstance(jdoc_ld['@context'], list):
                    for entry in jdoc_ld['@context']:
                        if not isinstance(entry, dict):
                            continue
                        base = entry.get('@base')
                        if base:
                            break
                else:
                    # If not a list, just look @base up
                    base = jdoc_ld['@context'].get('@base')
        if logger.isEnabledFor(logging.DEBUG):
            logger.debug('Uplifted JSON:\n%s', json.dumps(jdoc_ld, indent=2))

    def remote_context_url_filter(url_whitelist: str | list[str] | bool | None, url: str):
        if url_whitelist is False:
            return False
        if url_whitelist is True or url_whitelist is None:
            return True
        # Normalize a single pattern to a list so both cases iterate the same way
        if isinstance(url_whitelist, str):
            url_whitelist = [url_whitelist]
        patterns = [re.compile(x) for x in url_whitelist if x]
        return any(p.match(url) for p in patterns)

    g.parse(data=json.dumps(jdoc_ld), format='json-ld', base=base,
            remote_context_url_filter=functools.partial(remote_context_url_filter, fetch_url_whitelist))

    return UpliftResult(graph=g, uplifted_json=jdoc_ld)
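A minimal sketch of calling generate_graph with an in-memory document and an inline context definition (the schema.org IRI is a placeholder vocabulary choice):

from ogc.na.ingest_json import generate_graph

input_data = {'name': 'example'}
context_def = {
    'context': {
        '$': {'name': 'https://schema.org/name'}  # root-level @context
    }
}

result = generate_graph(input_data, context=[context_def],
                        base='https://example.com/doc')
print(result.graph.serialize(format='ttl'))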

process(input_files, domain_cfg=None, context_fn=None, jsonld_fn=False, ttl_fn=False, batch=False, base=None, skip_on_missing_context=False, provenance_base_uri=None, fetch_url_whitelist=None, transform_args=None, file_filter=None)

Performs the JSON-LD uplift process.

Parameters:

  • input_files (str | Path | Sequence[str | Path]): list of plain JSON input files. Required.
  • domain_cfg (DomainConfiguration | None): domain configuration including uplift definition locations. Default: None.
  • context_fn (str | Path | None): forces the YAML context file name for the uplift. If None, it will be autodetected. Default: None.
  • jsonld_fn (bool | str | Path | None): output file name for the JSON-LD content. If False, no JSON-LD output will be generated. If None, output will be written to stdout. Default: False.
  • ttl_fn (bool | str | Path | None): output file name for the Turtle RDF content. If False, no Turtle output will be generated. If None, output will be written to stdout. Default: False.
  • batch (bool): in batch mode, all JSON input files are obtained from the context registry and processed. Default: False.
  • base (str): base URI to employ. Default: None.
  • skip_on_missing_context (bool): whether to skip files for which no context file is found, instead of raising an error. Default: False.
  • provenance_base_uri (Optional[Union[str, bool]]): base URI for provenance resources. Default: None.
  • fetch_url_whitelist (Optional[Union[Sequence, bool]]): list of regular expressions to filter referenced JSON-LD context URLs before retrieving them. If None, no filtering is applied; if an empty sequence or False, remote fetching operations will throw an exception. Default: None.
  • transform_args (dict | None): additional arguments to pass as variables to the jq transform. Default: None.
  • file_filter (str | Pattern): filename filter for input files. Default: None.

Returns:

  • list[UpliftResult]: a list of uplift results with the JSON-LD and/or Turtle output files.

Source code in ogc/na/ingest_json.py
def process(input_files: str | Path | Sequence[str | Path],
            domain_cfg: DomainConfiguration | None = None,
            context_fn: str | Path | None = None,
            jsonld_fn: bool | str | Path | None = False,
            ttl_fn: bool | str | Path | None = False,
            batch: bool = False,
            base: str | None = None,
            skip_on_missing_context: bool = False,
            provenance_base_uri: Optional[Union[str, bool]] = None,
            fetch_url_whitelist: Optional[Union[Sequence, bool]] = None,
            transform_args: dict | None = None,
            file_filter: str | re.Pattern | None = None) -> list[UpliftResult]:
    """
    Performs the JSON-LD uplift process.

    :param input_files: list of plain JSON input files
    :param domain_cfg: domain configuration including uplift definition locations
    :param context_fn: used to force the YAML context file name for the uplift. If `None`,
           it will be autodetected
    :param jsonld_fn: output file name for the JSON-LD content. If it is `False`, no JSON-LD
           output will be generated. If it is `None`, output will be written to stdout.
    :param ttl_fn: output file name for the Turtle RDF content. If it is `False`, no Turtle
           output will be generated. If it is `None`, output will be written to stdout.
    :param batch: in batch mode, all JSON input files are obtained from the context registry
           and processed
    :param base: base URI to employ
    :param skip_on_missing_context: whether to skip files for which no context file is found,
           instead of raising an error
    :param provenance_base_uri: base URI for provenance resources
    :param fetch_url_whitelist: list of regular expressions to filter referenced JSON-LD context URLs before
        retrieving them. If None, it will not be used; if empty sequence or False, remote fetching operations will
        throw an exception
    :param transform_args: Additional arguments to pass as variables to the jq transform
    :param file_filter: Filename filter for input files
    :return: a list of uplift results with the JSON-LD and/or Turtle output files
    """
    result: list[UpliftResult] = []
    process_id = str(uuid.uuid4())
    workdir = Path()
    if isinstance(input_files, str) or not isinstance(input_files, Sequence):
        input_files = (input_files,)
    if batch:
        logger.info("Input files: %s", input_files)
        remaining_fn: deque = deque()
        for input_file in input_files:
            if isinstance(input_file, str):
                for x in filter(lambda x: x, input_file.split(',')):
                    if '*' in x or '?' in x:
                        remaining_fn.extend(workdir.glob(x))
                    else:
                        remaining_fn.append(x)
            else:
                remaining_fn.append(input_file)
        while remaining_fn:
            fn = str(remaining_fn.popleft())

            if not fn or not os.path.isfile(fn):
                continue

            if file_filter and not re.search(file_filter, fn):
                continue

            if re.match(r'.*\.ya?ml$', fn):
                # Check whether this is a context definition or a doc to uplift
                has_context = bool(find_contexts(fn, domain_cfg))

                if not has_context:
                    # Potential context file found, try to find corresponding JSON/JSON-LD file(s)
                    logger.info('Potential YAML context file found: %s', fn)
                    remaining_fn.extend(filenames_from_context(fn, domain_config=domain_cfg) or [])
                    continue

            logger.info('File %s matches, processing', fn)
            try:
                result.append(process_file(
                    fn,
                    jsonld_fn=False if jsonld_fn is False else None,
                    ttl_fn=False if ttl_fn is False else None,
                    context_fn=None,
                    base=base,
                    provenance_base_uri=provenance_base_uri,
                    provenance_process_id=process_id,
                    fetch_url_whitelist=fetch_url_whitelist,
                    domain_cfg=domain_cfg,
                    transform_args=transform_args,
                ))
            except MissingContextException as e:
                if skip_on_missing_context or batch:
                    logger.warning("Error processing JSON/JSON-LD file, skipping: %s", getattr(e, 'msg', str(e)))
                else:
                    raise
    else:
        for input_file in input_files:
            try:
                result.append(process_file(
                    input_file,
                    jsonld_fn=jsonld_fn if jsonld_fn is not None else '-',
                    ttl_fn=ttl_fn if ttl_fn is not None else '-',
                    context_fn=context_fn,
                    base=base,
                    provenance_base_uri=provenance_base_uri,
                    provenance_process_id=process_id,
                    fetch_url_whitelist=fetch_url_whitelist,
                    domain_cfg=domain_cfg,
                    transform_args=transform_args,
                ))
            except Exception as e:
                if skip_on_missing_context:
                    logger.warning("Error processing JSON/JSON-LD file, skipping: %s", getattr(e, 'msg', str(e)))
                else:
                    raise

    return result
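A sketch of non-batch use with placeholder paths; it assumes a context definition (e.g. data/test.yml) is discoverable for the input:

from ogc.na.ingest_json import process

# Uplift one document; write Turtle to an explicit file and disable
# JSON-LD output (jsonld_fn=False)
results = process('data/test.json',
                  ttl_fn='data/test.ttl',
                  jsonld_fn=False)
for r in results:
    print(r.output_files)  # e.g. ['data/test.ttl']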

process_file(input_fn, jsonld_fn=False, ttl_fn=False, context_fn=None, domain_cfg=None, base=None, provenance_base_uri=None, provenance_process_id=None, fetch_url_whitelist=None, transform_args=None)

Process input file and generate output RDF files.

Parameters:

  • input_fn (str | Path): input filename. Required.
  • jsonld_fn (str | Path | bool | None): output JSON-LD filename (None for automatic). If False, no JSON-LD output will be generated. Default: False.
  • ttl_fn (str | Path | bool | None): output Turtle filename (None for automatic). If False, no Turtle output will be generated. Default: False.
  • context_fn (str | Path | Sequence[str | Path] | None): YAML context filename. If None, it will be autodetected:
      1. from the domain_cfg registry (if provided);
      2. from a file with the same base name but a yml/yaml (or -uplift.yml) extension (test.json -> test.yml);
      3. from a _json-context.yml/_json-context.yaml file in the same directory.
    Default: None.
  • domain_cfg (DomainConfiguration | None): domain configuration with uplift definition locations. Default: None.
  • base (str | None): base URI for JSON-LD. Default: None.
  • provenance_base_uri (str | bool | None): base URI for provenance resources. Default: None.
  • provenance_process_id (str | None): process identifier for provenance tracking. Default: None.
  • fetch_url_whitelist (bool | Sequence[str] | None): list of regular expressions to filter referenced JSON-LD context URLs before retrieving them. If None, no filtering is applied; if an empty sequence or False, remote fetching operations will throw an exception. Default: None.
  • transform_args (dict | None): additional arguments to pass as variables to the jq transform. Default: None.

Returns:

  • UpliftResult | None: the uplift result, including the list of output files created.

Source code in ogc/na/ingest_json.py
def process_file(input_fn: str | Path,
                 jsonld_fn: str | Path | bool | None = False,
                 ttl_fn: str | Path | bool | None = False,
                 context_fn: str | Path | Sequence[str | Path] | None = None,
                 domain_cfg: DomainConfiguration | None = None,
                 base: str | None = None,
                 provenance_base_uri: str | bool | None = None,
                 provenance_process_id: str | None = None,
                 fetch_url_whitelist: bool | Sequence[str] | None = None,
                 transform_args: dict | None = None) -> UpliftResult | None:
    """
    Process input file and generate output RDF files.

    :param input_fn: input filename
    :param jsonld_fn: output JSON-LD filename (None for automatic).
        If False, no JSON-LD output will be generated
    :param ttl_fn: output Turtle filename (None for automatic).
        If False, no Turtle output will be generated.
    :param context_fn: YAML context filename. If None, will be autodetected:
        1. From the domain_cfg registry (if provided)
        2. From a file with the same base name but a yml/yaml (or -uplift.yml) extension (test.json -> test.yml)
        3. From a _json-context.yml/_json-context.yaml file in the same directory
    :param domain_cfg: domain configuration with uplift definition locations
    :param base: base URI for JSON-LD
    :param provenance_base_uri: base URI for provenance resources
    :param provenance_process_id: process identifier for provenance tracking
    :param fetch_url_whitelist: list of regular expressions to filter referenced JSON-LD context URLs before
        retrieving them. If None, it will not be used; if empty sequence or False, remote fetching operations will
        throw an exception
    :param transform_args: Additional arguments to pass as variables to the jq transform
    :return: the uplift result, including the list of output files created
    """

    start_time = datetime.now()

    if not isinstance(input_fn, Path):
        input_fn = Path(input_fn)

    if not input_fn.is_file():
        raise IOError(f'Input is not a file ({input_fn})')

    contexts = []
    provenance_contexts = []
    if not context_fn:
        for found_context in (find_contexts(input_fn, domain_config=domain_cfg) or ()):
            if isinstance(found_context, Path):
                contexts.append(util.load_yaml(filename=found_context))
            else:
                # Profile URI
                artifact_urls = domain_cfg.profile_registry.get_artifacts(found_context, profile.ROLE_SEMANTIC_UPLIFT)
                if artifact_urls:
                    for a in artifact_urls:
                        contexts.append(util.load_yaml(a))
                        provenance_contexts.append(a)

    elif not isinstance(context_fn, Sequence) or isinstance(context_fn, str):
        provenance_contexts = (context_fn,)
        contexts = (util.load_yaml(context_fn),)
    else:
        provenance_contexts = context_fn
        contexts = [util.load_yaml(fn) for fn in context_fn]

    if not contexts:
        raise MissingContextException('No context file provided and one could not be discovered automatically')

    # Apply input filter of first context only (if any)
    input_filters = contexts[0].get('input-filter')
    if input_filters:
        if not isinstance(input_filters, dict):
            raise ValueError('input-filter must be an object')
        input_data = apply_input_filter(input_fn, input_filters)
    else:
        # Accept both JSON and YAML
        input_data = util.load_yaml(input_fn)

    if logger.isEnabledFor(logging.DEBUG):
        logger.debug('Input data:\n%s', json.dumps(input_data, indent=2))

    provenance_metadata: ProvenanceMetadata | None = None
    if provenance_base_uri is not False:
        used = [FileProvenanceMetadata(filename=input_fn, mime_type='application/json')]
        used.extend(FileProvenanceMetadata(filename=c, mime_type='application/yaml') for c in provenance_contexts)
        provenance_metadata = ProvenanceMetadata(
            used=used,
            batch_activity_id=provenance_process_id,
            base_uri=provenance_base_uri,
            root_directory=os.getcwd(),
            start=start_time,
            end_auto=True,
        )

    if transform_args is None:
        transform_args = {}
    transform_args['_filename'] = str(input_fn.resolve())
    transform_args['_basename'] = str(input_fn.name)
    transform_args['_dirname'] = str(input_fn.resolve().parent)
    transform_args['_relname'] = os.path.relpath(input_fn)

    if not base:
        base = str(input_fn)

    uplift_result = generate_graph(input_data,
                                   context=contexts,
                                   base=base,
                                   fetch_url_whitelist=fetch_url_whitelist,
                                   transform_args=transform_args)

    uplift_result.input_file = input_fn

    # False = do not generate
    # None = auto filename
    # - = stdout
    if ttl_fn is not False:
        if ttl_fn == '-':
            if provenance_metadata:
                provenance_metadata.output = FileProvenanceMetadata(mime_type='text/turtle', use_bnode=False)
                generate_provenance(uplift_result.graph, provenance_metadata)
            print(uplift_result.graph.serialize(format='ttl'))
        else:
            if not ttl_fn:
                ttl_fn = input_fn.with_suffix('.ttl') \
                    if input_fn.suffix != '.ttl' \
                    else input_fn.with_suffix(input_fn.suffix + '.ttl')
            if provenance_metadata:
                provenance_metadata.output = FileProvenanceMetadata(filename=ttl_fn,
                                                                    mime_type='text/turtle',
                                                                    use_bnode=False)
                generate_provenance(uplift_result.graph, provenance_metadata)
            uplift_result.graph.serialize(destination=ttl_fn, format='ttl')
            uplift_result.output_files.append(ttl_fn)

    # False = do not generate
    # None = auto filename
    # "-" = stdout
    if jsonld_fn is not False:
        if jsonld_fn == '-':
            print(json.dumps(uplift_result.uplifted_json, indent=2))
        else:
            if not jsonld_fn:
                jsonld_fn = input_fn.with_suffix('.jsonld') \
                    if input_fn.suffix != '.jsonld' \
                    else input_fn.with_suffix(input_fn.suffix + '.jsonld')

            with open(jsonld_fn, 'w') as f:
                json.dump(uplift_result.uplifted_json, f, indent=2)
            uplift_result.output_files.append(jsonld_fn)

    return uplift_result
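A sketch with an explicit context definition file (paths are placeholders):

from ogc.na.ingest_json import process_file

# Uplift a single file; ttl_fn=None picks the automatic output name
# (data/test.json -> data/test.ttl), jsonld_fn=False disables JSON-LD output
result = process_file('data/test.json',
                      context_fn='data/test-uplift.yml',
                      ttl_fn=None,
                      jsonld_fn=False)
print(result.output_files)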

uplift_json(data, context, fetch_url_whitelist=None, transform_args=None)

Transform a JSON document loaded in a dict, and embed JSON-LD context into it.

WARNING: This function modifies the input dict. If that is not desired, make a copy before invoking.

Parameters:

  • data (dict | list): the JSON document as a dict or list. Required.
  • context (dict): YAML context definition. Required.
  • fetch_url_whitelist (Optional[Union[Sequence, bool]]): list of regular expressions to filter referenced JSON-LD context URLs before retrieving them. If None, no filtering is applied; if an empty sequence or False, remote fetching operations will throw an exception. Default: None.
  • transform_args (dict | None): additional arguments to pass as variables to the jq transform. Default: None.

Returns:

  • dict: the transformed and JSON-LD-enriched data.

Source code in ogc/na/ingest_json.py
def uplift_json(data: dict | list, context: dict,
                fetch_url_whitelist: Optional[Union[Sequence, bool]] = None,
                transform_args: dict | None = None) -> dict:
    """
    Transform a JSON document loaded in a dict, and embed JSON-LD context into it.

    WARNING: This function modifies the input dict. If that is not desired, make a copy
    before invoking.

    :param data: the JSON document as a dict or list
    :param context: YAML context definition
    :param fetch_url_whitelist: list of regular expressions to filter referenced JSON-LD context URLs before
        retrieving them. If None, it will not be used; if empty sequence or False, remote fetching operations will
        throw an exception.
    :param transform_args: Additional arguments to pass as variables to the jq transform
    :return: the transformed and JSON-LD-enriched data
    """

    context_position = context.get('position', 'before')

    validate_context(context, transform_args=transform_args)

    # Check whether @graph scoping is necessary for transformations and paths
    scoped_graph = context.get('scope', 'graph') == 'graph' and '@graph' in data
    data_graph = data['@graph'] if scoped_graph else data

    # Check if pre-transform necessary
    transform = context.get('transform')
    if transform:
        # Allow for transform lists to do sequential transformations
        if isinstance(transform, str):
            transform = (transform,)
        for i, t in enumerate(transform):
            transformed_txt = jq.compile(t, args=transform_args).input(data_graph).text()
            if logger.isEnabledFor(logging.DEBUG):
                logger.debug('After transform %d:\n%s', i + 1, transformed_txt)
            data_graph = json.loads(transformed_txt)

    # Add types
    types = context.get('types', {})
    for loc, type_list in types.items():
        items = json_path_parse(loc).find(data_graph)
        if isinstance(type_list, str):
            type_list = [type_list]
        for item in items:
            existing = item.value.setdefault('@type', [])
            if isinstance(existing, str):
                item.value['@type'] = [existing] + type_list
            else:
                item.value['@type'].extend(type_list)
            item_types = item.value.get('@type')
            if not item_types:
                item.value.pop('@type', None)
            elif isinstance(item_types, Sequence) and not isinstance(item_types, str) and len(item_types) == 1:
                item.value['@type'] = item_types[0]

    # Add contexts
    context_list = context.get('context', {})
    global_context = None
    for loc, val in context_list.items():
        if not loc or loc in ['.', '$']:
            global_context = val
        else:
            items = json_path_parse(loc).find(data_graph)
            for item in items:
                item.value['@context'] = _get_injected_context(item.value, val, context_position)

    if isinstance(data_graph, dict):
        data_context = data_graph.pop('@context', None)
        if data_context:
            if not global_context:
                global_context = data_context
            elif isinstance(global_context, list):
                # An embedded @context may itself be a dict, str or list
                if isinstance(data_context, list):
                    global_context.extend(data_context)
                else:
                    global_context.append(data_context)
            else:
                global_context = [data_context, global_context]

    if (global_context and not isinstance(data_graph, dict)) or scoped_graph:
        return {
            '@context': _get_injected_context(data, global_context, context_position),
            '@graph': data_graph,
        }
    else:
        if global_context:
            return {
                '@context': _get_injected_context(data, global_context, context_position),
                **data_graph
            }
        return data_graph
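Because uplift_json mutates its argument, deep-copy the input when the original document is still needed. A minimal sketch (the type and vocabulary values are placeholders):

import copy

from ogc.na.ingest_json import uplift_json

doc = {'name': 'example'}
context_def = {
    'types': {'$': 'schema:Thing'},
    'context': {'$': {'schema': 'https://schema.org/'}},
}

# Work on a copy so the original dict stays untouched
uplifted = uplift_json(copy.deepcopy(doc), context_def)
# uplifted now contains '@context' and '@type' alongside the original keys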