wpu.nu

VariableNaming

From wpu.nu

Current version from 18 June 2021, 16:05

DOCUMENT STANDARD

NAMING

Documents are stored as PDF with a filename of the format: [document_md5sum].pdf

Document metadata is saved as a JSON file with a filename of the format: [document_md5sum].json, in the same folder as its corresponding PDF file.

Exceptions: On wiki import, the file is copied to the import folder with a descriptive filename, as that is what the corresponding wiki page will be named.

On download, the file initially keeps whatever name it is given by the download, as a checksum cannot be calculated until the file is on disk. It is then immediately renamed according to the standard.

On generation, the file is given a unique name (such as the parent document plus the pages to include). When the file has been generated, a checksum is calculated and the file is renamed.
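
As a rough sketch of that last step, assuming Python tooling and a helper that hashes the finished PDF and writes its metadata next to it (the function and argument names are made up for the example):

    import hashlib
    import json
    from pathlib import Path

    def rename_to_checksum(pdf_path: Path, metadata: dict) -> Path:
        # The checksum can only be calculated once the finished PDF is on disk.
        md5 = hashlib.md5(pdf_path.read_bytes()).hexdigest()

        # Rename the PDF to [document_md5sum].pdf in the same folder.
        target = pdf_path.with_name(md5 + ".pdf")
        pdf_path.rename(target)

        # Save the metadata as [document_md5sum].json next to the PDF.
        metadata["Checksum"] = md5
        target.with_suffix(".json").write_text(
            json.dumps(metadata, ensure_ascii=False, indent=2), encoding="utf-8"
        )
        return target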

ATTRIBUTES

Some document attributes are reserved and given a defined meaning here. Other attributes may exist, but should not be passed to other programs.

CamelCase is used because that is the convention on the wiki.

GENERAL

  • Type: [interrogation, promemoria, confrontation, letter, etc]
  • Name: A well-formatted and descriptive name that will be the base name of the wiki page. If not overridden with NameeOverride, this is re-generated from document metadata on each access. Includes the file extension (.pdf)
  • WikiName: Same as above but after wiki name formatting and folding
  • NameeOverride: If the name has been changed manually this variable contains the new name, otherwise the empty string
  • Title: A short sub-title
  • OriginalFilename: Filename of the downloaded or imported file as first seen
  • Checksum: MD5 checksum of the corresponding PDF file
  • SectionCode: A code referring to the ledger
  • InterrogationStart: Date and/or time of interrogation start or generation of the document
  • InterrogationEnd: Date and/or time of interrogation end
  • PrintoutDate: Date of printout of the document (used in some templates)
  • Language: sv
  • Interrogator: Name of the interrogator
  • Interrogated: Name of the person interrogated
  • ShortSummary: A short summary displayed on the wiki
  • Author: Name of the person generating a document, if not the interrogator
  • Progress: Stage of the document in wiki terms (e.g. proofread, waiting for OCR)
  • Notes: Notes on the document, also used for OCR errors etc.
  • Stage: The stage in the import process [OCR, waiting, downloaded, etc]
  • Release: True if the document should be deleted from a list in the next iteration
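
As an illustration only, a metadata record using a few of these attributes could look like the sketch below (written as a Python dict; all values are invented and do not describe a real document):

    # Invented example record; field names follow the list above, values are placeholders.
    document = {
        "Type": "interrogation",
        "Name": "Förhör med NN 1986-03-05.pdf",
        "Title": "Förhör med NN",
        "Checksum": "d41d8cd98f00b204e9800998ecf8427e",
        "InterrogationStart": "1986-03-05 10:15",
        "Language": "sv",
        "Interrogator": "NN",
        "Interrogated": "NN",
        "Stage": "ocr",
        "Release": False,
    }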

SPIDER

  • FoundAt: Name of the page where the download link was found
  • FoundAtURL: The URL on which the download link was found (if any)
  • DownloadURL: The URL from which the PDF was downloaded
  • DownloadURLText: Text tag from the above URL
  • DownloadDescription: A description found in association with the download URL

IMPORT

  • NumberOfPages: Number of pages contained in the corresponding PDF
  • SizeBytes: PDF file size in bytes (int)
  • SlackDocShouldImportMsgTS: ID of the Slack message asking about import
  • SlackDocShouldImportResponse: The Slack response to the message asking about import
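
A possible shape of that Slack step, using the slack_sdk client; the channel, message text and surrounding function are assumptions for the example, not the project's actual code:

    from slack_sdk import WebClient

    def ask_should_import(document: dict, client: WebClient, channel: str) -> None:
        # Post the question and keep the message timestamp so that a later
        # reply can be matched back to this document.
        response = client.chat_postMessage(
            channel=channel,
            text=f"Import {document['Name']} ({document['NumberOfPages']} pages)?",
        )
        document["SlackDocShouldImportMsgTS"] = response["ts"]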

DERIVED

  • PDFFilename: <checksum>.pdf
  • JSONFilename: <checksum>.json
  • PDFFilePath: <directory>/<checksum>.pdf
  • JSONFilePath: <directory>/<checksum>.json
  • SizeHuman: PDF file size in human-readable form incl. units (str)
  • Directory: <base_directory>/<Stage>/
  • OCRCacheTestFile: Filename which, if present, should cause cached OCR results to be used
  • WikiIndex: Index:<WikiTitle>
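
The path-like values above follow mechanically from Checksum, Stage and the base directory; a minimal sketch of that derivation (the function name and dict representation are assumptions for the example):

    import os

    def derive_paths(document: dict, base_directory: str) -> dict:
        # Derived attributes are recomputed from the stored ones rather than edited by hand.
        checksum = document["Checksum"]
        directory = os.path.join(base_directory, document["Stage"])
        return {
            "PDFFilename": checksum + ".pdf",
            "JSONFilename": checksum + ".json",
            "Directory": directory,
            "PDFFilePath": os.path.join(directory, checksum + ".pdf"),
            "JSONFilePath": os.path.join(directory, checksum + ".json"),
        }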


LOG

list of dict

log[<date>] = {text:str, ...}
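
One possible reading of that notation, assuming entries are keyed by date and each entry is a dict with at least a text field; the actual structure may differ:

    # Hedged illustration of the log structure described above.
    log = {}
    log["2021-06-18"] = {"text": "Document imported to the wiki"}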


STAGES

downloading -> downloaded -> ocr_queue -> ocr -> ask_slack -> import
                                                    \------> waiting -> import
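
Written out as data, the same pipeline might look like this (only a restatement of the diagram, not taken from the actual code):

    # Allowed stage transitions, restating the diagram above.
    STAGE_TRANSITIONS = {
        "downloading": ["downloaded"],
        "downloaded": ["ocr_queue"],
        "ocr_queue": ["ocr"],
        "ocr": ["ask_slack"],
        "ask_slack": ["import", "waiting"],
        "waiting": ["import"],
    }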

PROCESSES

MOPOCR2

Opens a checksum-named PDF together with its JSON file, OCRs it, adds the resulting information to the JSON and saves both back to the output folder.
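
In outline, that step could look like the sketch below; run_ocr is a stand-in for whatever OCR engine MOPOCR2 actually uses, and the added OCRText field as well as the folder layout are assumptions:

    import json
    from pathlib import Path

    def run_ocr(pdf_path: Path) -> str:
        # Stand-in for the real OCR engine (not specified here).
        raise NotImplementedError

    def process_document(checksum: str, in_dir: Path, out_dir: Path) -> None:
        pdf_path = in_dir / (checksum + ".pdf")
        json_path = in_dir / (checksum + ".json")
        metadata = json.loads(json_path.read_text(encoding="utf-8"))

        # Add the OCR result to the metadata; the field name is hypothetical.
        metadata["OCRText"] = run_ocr(pdf_path)
        metadata["Stage"] = "ask_slack"  # next stage according to the diagram above

        # Save PDF and JSON back to the output folder under the same names.
        out_dir.mkdir(parents=True, exist_ok=True)
        (out_dir / pdf_path.name).write_bytes(pdf_path.read_bytes())
        (out_dir / json_path.name).write_text(
            json.dumps(metadata, ensure_ascii=False, indent=2), encoding="utf-8"
        )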


wpu-spindeln

Crawls pages (including the wiki), downloads and renames files. New files are announced on Slack. If manual import is needed, wpu-spindeln can split and import documents. Adds metadata to the JSON files.