
Queue-based transcoding w/ monq #104

Closed
wants to merge 18 commits into from

Conversation

@mojodna (Collaborator) commented Sep 14, 2017

This replaces the bespoke queue / worker implementation with monq and adds a local worker that uses marblecutter-tools to transcode imagery. Status updates and final metadata are provided using HTTP callbacks.
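For reference, a minimal sketch of monq's documented queue/worker API (the queue name, job payload, and handler below are illustrative and not taken from this PR; DB_URL is the Mongo connection string used elsewhere in the config):

// producer side: enqueue a transcoding job
var monq = require('monq');
var client = monq(process.env.DB_URL);

var queue = client.queue('transcodes');
queue.enqueue('transcode', { sourceUrl: 'http://example.com/scene.tif' }, function (err, job) {
  if (err) throw err;
  console.log('enqueued:', job.data);
});

// worker side (roughly the role bin/transcoder.js plays): register a handler and start polling
var worker = client.worker(['transcodes']);
worker.register({
  transcode: function (params, callback) {
    // hand the image off to marblecutter-tools here, then report the result
    callback(null, { status: 'processed' });
  }
});
worker.start();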

@tombh (Contributor) left a comment

This is 🔥 Amazing to see how much simpler the code is with monq. Nice work.

The only real issue is that you ripped out the existing tests and haven't added any replacements.

@@ -0,0 +1,2 @@
test -f .env && dotenv
Contributor

What's this file for?

Collaborator Author

https://direnv.net/

It allows .env files to be used (by loading the contents into one's environment separately) without necessitating tools like foreman or libraries like dotenv that tie more tightly to .env files.

(It's really convenient for me but isn't necessary.)
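As a sketch of the flow (the .envrc line is the one in this diff; the .env values are hypothetical):

# .envrc -- direnv evaluates this when you cd into the project
test -f .env && dotenv

# .env -- loaded by direnv's dotenv helper; example values only
DB_URL=mongodb://localhost:27017/oam-api
AWS_ACCESS_KEY_ID=...
AWS_SECRET_ACCESS_KEY=...

# one-time approval so direnv will execute the .envrc
direnv allow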

Contributor

Ok, then I think this needs to be in .gitignore.
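Something along these lines would keep the local-only files out of the repo (a sketch only):

# local-only environment configuration
.envrc
.env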

@@ -1,11 +1,31 @@
version: '2'
services:
app:
environment:
Contributor

Why does mongo need to be in Docker? It's easy enough to install on all OSs and besides we need to access the Mongolab DBs in production.

Collaborator Author

I work on a lot of different projects with a wide variety of dependencies, so this is a convenient way to keep each set of service dependencies isolated. It's for a standalone dev environment, essentially.

Feel free to revert this; I can keep an uncommitted version locally to facilitate development.
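For reference, a rough sketch of the kind of compose addition under discussion (the mongo service definition, image tag, and DB_URL wiring are assumptions, not the PR's actual file):

version: '2'
services:
  mongo:
    image: mongo:3.4
    ports:
      - "27017:27017"
  app:
    environment:
      - DB_URL=mongodb://mongo/oam-api
    depends_on:
      - mongo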



volumes:
- ./bin:/app/bin
Contributor

Is this for mounting development code? If so that should be put in the test/docker-compose.yml file, in fact, mounting live code is already solved in that file.

Collaborator Author

Ditto for a local development environment. Each sub-directory / file is called out so that node_modules doesn't get mounted in (for cross-platform binary reasons).

I can keep this uncommitted locally.
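As a sketch of that pattern (only ./bin appears in the hunk above; the other paths are purely illustrative):

# mount source directories individually so the container keeps its own node_modules
volumes:
  - ./bin:/app/bin            # from the diff above
  - ./config:/app/config      # illustrative
  - ./services:/app/services  # illustrative
# note: no "- .:/app" line, so node_modules is never shadowed by the host copy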

@@ -144,6 +158,105 @@ module.exports = [
},

/**
* @api {post} /uploads/:id/:sceneIdx/:imageId Update imagery metadata
Contributor

I don't fully understand the purpose of this endpoint. I think it's run after process.sh has completed (whether successfully or not)? But I don't understand why it's over HTTP? process.sh has direct access to the DB, so why not just do this in process.sh? Or even better, ideally, in bin/transcoder.js?

@mojodna (Collaborator Author), Sep 14, 2017

It's used for marking status generally--requests will be made when transcoding starts, when different stages are reached, and when it completes (successfully or not).

I'm trying to treat process.sh as a black-boxed external dependency to the extent I'm able. Mapzen uses this same script to transcode DEMs and needs to write footprint data to Postgres, so HTTP seemed like the simplest way to decouple them.

When I add the optional Batch transcoding backend (as an alternative to the bin/transcoder.js monq worker), this may make more sense (transcoding jobs are fully queued rather than having "callbacks" associated with them--it doesn't seem to make sense to have each transcoding node open a connection to Mongo).
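For illustration, a worker-side status callback might look roughly like this (the host, variable names, and payload fields are hypothetical; only the path shape comes from the endpoint in the hunk above):

# hypothetical status update from a transcoding worker back to the catalog API
curl -X POST "http://catalog.example.com/uploads/$UPLOAD_ID/$SCENE_IDX/$IMAGE_ID" \
  -H "Content-Type: application/json" \
  -d '{"status": "processing", "message": "transcoding started"}'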

@tombh (Contributor), Sep 15, 2017

it doesn't seem to make sense to have each transcoding node open a connection to Mongo

Can you explain this in detail please?

I think there are a few problems with this approach. As mentioned, it uses HTTP when there's a perfectly good Mongo socket connection already there. Supporting Batch is out of scope for this work, not to mention that Batch is platform-specific - in fact I'm sure that the "on-demand worker instance" paradigm is significantly cheaper almost everywhere apart from AWS. If we're going to be accommodating said paradigm, then there are broader benefits to supporting a platform-agnostic method. However, I don't think OAM's transcoding load is going to scale to the point of seeing architectural cost-benefits for a long time, and even then, the simpler approach is just to host somewhere cheaper. Also, process.sh should first and foremost be serving OAM; it doesn't need to cater to anyone else. And even if there were theoretically a justification for HTTP here, then strictly speaking it should exist as a separate API, and therefore a distinct service. Having this endpoint on the catalog is unintuitive and sets a bad precedent of blurring separation of concerns.

Collaborator Author

@smit1678 is Batch out of scope? I've been assuming that it's not, for these reasons:

  • we'd planned on incorporating it when the uploader got refactored to transcode imagery, but Nick ran out of time
  • OAM occasionally needs to handle ~50GB uploads and process them; in the current paradigm, this means keeping hundreds of GB of space available for when this does happen
  • after disasters, large quantities of imagery become available and should be ingested into OAM (I just did this with DG's hurricane-related imagery this/last week). being able to do this rapidly benefits many parties. using Batch makes this process much quicker--Irma imagery took ~12 hours to fully ingest because it could only process 4 images at a time (and I managed to use up the burst credits on the t2 instance; granted, an AWS-specific thing).
  • it represents progressive enhancement when running on AWS; the existing worker implementation (mongodb/monq) can be used when Batch isn't available

marblecutter-tools (which includes process.sh) was extracted and made generic based on the transcoding utilities in oam-dynamic-tiler. Mapzen is using these for DEM data, and re-adding application-specific code will only make it diverge again.

Transcoding is logically a separate service ("service / tool that produces COGs") distinct from the catalog API (which deals in metadata management + uploads). Without adding application-specific service dependencies (mongo), do you have any suggestions on how to communicate progress, status, and eventually submit generated metadata about items that have been queued?

Collaborator

is Batch out of scope? I've been assuming that it's not, for these reasons:

Correct, I don't think this is out of scope at all; it's something we've been thinking about for a while. But I don't think scope is the right question here. Let's agree on an implementation method for extracting and refactoring the dynamic-tiler into a set of tools, as we've talked about. Let's make sure we're on the same page there.

@tombh (Contributor) commented Sep 14, 2017

For some reason the Travis build didn't trigger on this. But I know for a fact the tests won't pass for this PR. I've changed the Travis settings, so it should build on the next commit. So at the very least this won't get merged until the existing tests pass.

Edit: Just figured it out, the Travis build contains encrypted ENV vars, so external PRs will not run the integration tests. Seth, do you not have direct commit access on this repo?

@mojodna (Collaborator Author) commented Sep 14, 2017

I didn't actually remove any tests--just the processImage stub for the old worker implementation.

1 test does currently fail when running npm test (it times out after 2s, which makes sense since the transcoding worker isn't running in that environment). I'll stub this out when I add the optional batch transcoding implementation.

(maybe this belongs in an issue) Can you update the docs on how to run the integration tests (via test/docker-compose.yml)? I think I got them started by doing the following:

  • providing real AWS credentials
  • running mongod --bind_ip 0.0.0.0 on my Mac (so the test environment can connect from within Docker)
  • updating process.env.DB_URL in config.js to point to my Mac's IP (since localhost is the VM hosting Docker despite Docker for Mac's port forwarding in the opposite direction)

However, it seems to fail to initialize while indexing the staging bucket:

test-app_1  | 7:03:15 PM worker.1     |  --- Reading from bucket: oin-hotosm-staging ---
test-app_1  | 7:03:17 PM worker.1     |  { InternalError: We encountered an internal error. Please try again.
test-app_1  | 7:03:17 PM worker.1     |      at Request.extractError (/host-app/node_modules/aws-sdk/lib/services/s3.js:577:35)
test-app_1  | 7:03:17 PM worker.1     |      at Request.callListeners (/host-app/node_modules/aws-sdk/lib/sequential_executor.js:105:20)
test-app_1  | 7:03:17 PM worker.1     |      at Request.emit (/host-app/node_modules/aws-sdk/lib/sequential_executor.js:77:10)
test-app_1  | 7:03:17 PM worker.1     |      at Request.emit (/host-app/node_modules/aws-sdk/lib/request.js:683:14)
test-app_1  | 7:03:17 PM worker.1     |      at Request.transition (/host-app/node_modules/aws-sdk/lib/request.js:22:10)
test-app_1  | 7:03:17 PM worker.1     |      at AcceptorStateMachine.runTo (/host-app/node_modules/aws-sdk/lib/state_machine.js:14:12)
test-app_1  | 7:03:17 PM worker.1     |      at /host-app/node_modules/aws-sdk/lib/state_machine.js:26:10
test-app_1  | 7:03:17 PM worker.1     |      at Request.<anonymous> (/host-app/node_modules/aws-sdk/lib/request.js:38:9)
test-app_1  | 7:03:17 PM worker.1     |      at Request.<anonymous> (/host-app/node_modules/aws-sdk/lib/request.js:685:12)
test-app_1  | 7:03:17 PM worker.1     |      at Request.callListeners (/host-app/node_modules/aws-sdk/lib/sequential_executor.js:115:18)
test-app_1  | 7:03:17 PM worker.1     |      at Request.emit (/host-app/node_modules/aws-sdk/lib/sequential_executor.js:77:10)
test-app_1  | 7:03:17 PM worker.1     |      at Request.emit (/host-app/node_modules/aws-sdk/lib/request.js:683:14)
test-app_1  | 7:03:17 PM worker.1     |      at Request.transition (/host-app/node_modules/aws-sdk/lib/request.js:22:10)
test-app_1  | 7:03:17 PM worker.1     |      at AcceptorStateMachine.runTo (/host-app/node_modules/aws-sdk/lib/state_machine.js:14:12)
test-app_1  | 7:03:17 PM worker.1     |      at /host-app/node_modules/aws-sdk/lib/state_machine.js:26:10
test-app_1  | 7:03:17 PM worker.1     |      at Request.<anonymous> (/host-app/node_modules/aws-sdk/lib/request.js:38:9)
test-app_1  | 7:03:17 PM worker.1     |      at Request.<anonymous> (/host-app/node_modules/aws-sdk/lib/request.js:685:12)
test-app_1  | 7:03:17 PM worker.1     |      at Request.callListeners (/host-app/node_modules/aws-sdk/lib/sequential_executor.js:115:18)
test-app_1  | 7:03:17 PM worker.1     |      at callNextListener (/host-app/node_modules/aws-sdk/lib/sequential_executor.js:95:12)
test-app_1  | 7:03:17 PM worker.1     |      at IncomingMessage.onEnd (/host-app/node_modules/aws-sdk/lib/event_listeners.js:269:13)
test-app_1  | 7:03:17 PM worker.1     |      at emitNone (events.js:91:20)
test-app_1  | 7:03:17 PM worker.1     |      at IncomingMessage.emit (events.js:185:7)
test-app_1  | 7:03:17 PM worker.1     |    message: 'We encountered an internal error. Please try again.',
test-app_1  | 7:03:17 PM worker.1     |    code: 'InternalError',
test-app_1  | 7:03:17 PM worker.1     |    region: null,
test-app_1  | 7:03:17 PM worker.1     |    time: 2017-09-14T19:03:17.514Z,
test-app_1  | 7:03:17 PM worker.1     |    requestId: '68571D9E5E657AC6',
test-app_1  | 7:03:17 PM worker.1     |    extendedRequestId: 'JPg0mA2kxPPrxQVEorNP2A9GAOv9Uw3RzSfyslkeBmrrVERT9C15NeQZHh4ggiCF4yq14SIx8Hg=',
test-app_1  | 7:03:17 PM worker.1     |    cfId: undefined,
test-app_1  | 7:03:17 PM worker.1     |    statusCode: 500,
test-app_1  | 7:03:17 PM worker.1     |    retryable: true }
test-app_1  | 7:03:17 PM worker.1     |  undefined

@tombh (Contributor) commented Sep 15, 2017

The tests are passing on Travis from the parent commit, so travis.yml can be a guide too. It should just be docker-compose -f test/docker-compose.yml up -d followed by mocha test/integration.

Are you using that .env I gave you? It looks like you're not picking up the S3 credentials.

You can update DB_URL in .env.

@smit1678 (Collaborator) commented

Supporting Batch is out of scope for this work, not to mention that Batch is platform-specific - in fact I'm sure that the "on-demand worker instance" paradigm is significantly cheaper almost everywhere apart from AWS.

@tombh @mojodna What do we need to do to come to a consensus here? The current plan is to stay on AWS, so we should be open to finding wins where we can by leveraging AWS services. I think our time is better spent on other things than making sure we're platform agnostic, so platform-specific code shouldn't be an issue here.

@tombh (Contributor) commented Sep 15, 2017

The point is that supporting Batch is out of scope. The next sentence is personal opinion framed by the incidental rhetoric of my "not to mention". The main point is that platform-specific support is simply not part of the scope of this work, whereas "thorough refactoring" is. And even if it were theoretically in scope, that still wouldn't justify the use of a new HTTP endpoint to update job statuses.

Edit: And completely off topic, I would very strongly disagree that our time is better spent making sure we're platform agnostic. I know for a fact that the tiler's dependence on Lambda has cost me significant development and debugging time.

@mojodna (Collaborator Author) commented Sep 15, 2017

I know for a fact that the tiler's dependence on Lambda has cost me significant development and debugging time.

Can you elaborate / ask for help?

If you need to run it locally, there's a Flask version of the server available.

Requires a queue and job definition to exist and be configured.
@tombh (Contributor) commented Sep 18, 2017

Firstly, thanks for fixing that test. I've checked and it passes. However, the integration test is still failing. That's the one that actually tests a real upload using a test container under test/docker-compose.yml. The test can be run with mocha test/integration. There are cross-browser tests too, but don't worry about those; nothing you've done should affect them.

Also, whilst I still strongly disagree with the use of Batch, your implementation is small, concise and optional, so I don't have a big problem with it being here. I think we're getting off topic discussing it further, so let's leave it at that.

Now, most importantly, this HTTP endpoint. I have a better idea of where you're coming from, but I still need some clarification, because it still feels like we're missing a really easy refactoring win. Just to try and break this down a bit, I want to use the example of profile picture thumbnail creation. You know, you sign up for a new account somewhere, upload your huge profile image and the site has to shrink it down for you. This of course requires something like imagemagick bindings and a worker process. So the steps in the worker process are:

  1. Get image
  2. Scale image
  3. Copy image to permanent storage.

I want to note that imagemagick only does step 2.

Now let's consider Marblecutter Tools' process.sh:

  1. Download raw image data
  2. Transcode to COG compatibility
  3. Thumbnail creation
  4. Footprint creation
  5. Upload to permanent storage

To my mind, refactored tools like Marblecutter Tools and imagemagick are logically not responsible for uploading/downloading media. Fundamentally, Marblecutter is a dynamic tiler; it is not for it to preempt how people use it. Not everyone uses S3-compatible services and not everyone needs 50GB of temp storage for their COGs. So that just leaves steps 2, 3 and 4. In fact, not everyone even wants steps 2 and 3. Anyway, there's already a refactored tool for 2, transcode.sh, great. Then what if there were similar tools for 3 and 4? thumbnail.sh -width [width] -height [height] and footprint.sh? Or even better, a suite of subcommands under something like mbtools? So you could do things like:

mbtools mkcog florida.tiff
mbtools thumbnail -width 300 -height 300 florida.tiff
mbtools footprint florida.tiff

This is what I've spoken about before in regards to making Marblecutter Tools installable via pip or npm. I know there is an unsolved issue about the dependency on a specific gdal version, but I'm sure, later down the line (certainly not part of this PR), we can use pip's wheels to include prebuilt binaries. So ideally, anyone that wants to take advantage of Marblecutter just needs a requirements.txt in their project and a pip install, which they might already have if it's a Python project anyway. Then they have the freedom to write their own process.sh using whatever technologies they see fit. They don't need to depend on an opinionated Docker image, S3 or a prescribed HTTP endpoint to get progress updates. Not only are such dependencies difficult to accommodate in a production environment, they are troublesome to set up locally for debugging, updating code for PRs and testing. In fact, on the topic of sending PRs to Marblecutter Tools, it's not at all ideal for OAM to have to send a PR to marblecutter-tools to change anything in process.sh - like changing the thumbnail resolution (something we actually desperately need to do; as you know, they're too large at the moment).

All of which is to say that Mapzen and OAM should be responsible for their own process.sh. I see no reason for this not to be the case, other than the effort required to refactor something like mbtools, which as far as I understood was where we were heading anyway with the scope of this work. OAM's process.sh might look like:

import 'db';
import 'worker';
import 's3';
import 'metadata_gen';

worker.progress('downloading gtiff');
s3.get(gtiff);

worker.progress('converting to cog');
spawn('mbtools mkcog ' + gtiff);

worker.progress('generating thumbnail');
spawn('mbtools thumbnail -width 300 -height 300 ' + gtiff);

worker.progress('generating footprint');
spawn('mbtools footprint ' + gtiff);

worker.progress('generating metadata');
meta = metadata_gen(gtiff);

worker.progress('uploading assets');
s3.put(gtiff);
s3.put(meta);

worker.progress('completed');

@smit1678 (Collaborator) commented

@tombh Can you please break the refactor and other ideas out into separate tickets? We can then come back to see how we can continue to improve.

@tombh @mojodna To make sure we're clear on the discussion at hand, the open and on-topic questions are:

  • Fixing the integration test: is this something that needs to be fixed?

  • The HTTP endpoint (I'm not sure of the details): is this something that we can come back to as a future item to work on and optimize?

@tombh (Contributor) commented Sep 20, 2017

I understand the refactoring and the HTTP endpoint to be precisely the same topic. I don't think it's right to make it into another ticket or come back to it later. I need to understand the thinking behind why an HTTP endpoint is the right approach here.

Yes, the integration test still needs to be fixed.

@mojodna (Collaborator Author) commented Sep 20, 2017

I can't get the integration tests to run in my environment:

$ mocha test/integration


(node:93701) DeprecationWarning: `open()` is deprecated in mongoose >= 4.11.0, use `openUri()` instead, or set the `useMongoClient` option if using `connect()` or `createConnection()`. See http://mongoosejs.com/docs/connections.html#use-mongo-client
  Imagery CRUD
(node:93701) DeprecationWarning: Mongoose: mpromise (mongoose's default promise library) is deprecated, plug in your own promise library instead: http://mongoosejs.com/docs/promises.html
{ '0':
   { Error: connect ECONNREFUSED 127.0.0.1:4000
       at Object._errnoException (util.js:1026:11)
       at _exceptionWithHostPort (util.js:1049:20)
       at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1174:14)
     code: 'ECONNREFUSED',
     errno: 'ECONNREFUSED',
     syscall: 'connect',
     address: '127.0.0.1',
     port: 4000 } }
    1) "before each" hook


  0 passing (84ms)
  1 failing

  1) Imagery CRUD "before each" hook:
     Uncaught TypeError: Cannot read property 'statusCode' of undefined
      at Request._callback (test/integration/helper.js:63:26)
      at self.callback (node_modules/request/request.js:188:22)
      at Request.onRequestError (node_modules/request/request.js:884:8)
      at Socket.socketErrorListener (_http_client.js:401:9)
      at emitErrorNT (internal/streams/destroy.js:64:8)
      at _combinedTickCallback (internal/process/next_tick.js:138:11)
      at process._tickDomainCallback (internal/process/next_tick.js:218:9)

docker-compose output:

$ docker-compose -f test/docker-compose.yml up
WARNING: The APP_FROM variable is not set. Defaulting to a blank string.
Starting test_test-app_1 ...
Starting test_test-app_1 ... done
Attaching to test_test-app_1
test-app_1  | [OKAY] Loaded ENV .env File as KEY=VALUE Format
test-app_1  | 5:40:55 PM worker.1     |  Starting catalog worker...
test-app_1  | Starting up http-server, serving ./test
test-app_1  | Available on:
test-app_1  |   http://127.0.0.1:8080
test-app_1  |   http://192.168.65.2:8080
test-app_1  |   http://172.18.0.1:8080
test-app_1  | Hit CTRL-C to stop the server
test-app_1  | 5:40:58 PM worker.1     |  (node:28) DeprecationWarning: `open()` is deprecated in mongoose >= 4.11.0, use `openUri()` instead, or set the `useMongoClient` option if using `connect()` or `createConnection()`. See http://mongoosejs.com/docs/connections.html#use-mongo-client
test-app_1  | 5:40:58 PM worker.1     |  Successfully connected to mongodb://10.0.1.49:27017/oam-api-test
test-app_1  | 5:41:00 PM worker.1     |  Running a catalog worker (cron time: */15 * * * * *)
test-app_1  | 5:41:00 PM worker.1     |  Last system update time: 1970-01-01T00:00:00.000Z
test-app_1  | [Wed Sep 20 2017 17:41:00 GMT+0000 (UTC)] "GET /fixtures/oin-buckets.json" "undefined"
test-app_1  | 5:41:00 PM worker.1     |  --- Started indexing all buckets ---
test-app_1  | 5:41:00 PM worker.1     |  --- Reading from bucket: oin-hotosm-staging ---
test-app_1  | 5:41:00 PM web.1        |  (node:20) DeprecationWarning: `open()` is deprecated in mongoose >= 4.11.0, use `openUri()` instead, or set the `useMongoClient` option if using `connect()` or `createConnection()`. See http://mongoosejs.com/docs/connections.html#use-mongo-client
test-app_1  | 5:41:01 PM worker.1     |  [meta] http://oin-hotosm-staging.s3.amazonaws.com/development/593b6c15809dec0012c8e8c8/0/f6034f61-1994-4fcf-8300-cc50515a71a7.tif added!
test-app_1  | 5:41:01 PM worker.1     |  [meta] http://oin-hotosm-staging.s3.amazonaws.com/development/593b7c75aa640500140c3139/0/0b4d8ecb-a11f-484b-aed6-26ee86f50678.tif added!
test-app_1  | 5:41:01 PM worker.1     |  --- Finished indexing all buckets ---
test-app_1  | 5:41:01 PM worker.1     |  --- Added new analytics record ---
test-app_1  | 5:41:01 PM web.1        |  Server (test) running at: http://moby:4000
test-app_1  | 5:41:01 PM web.1        |  Successfully connected to mongodb://10.0.1.49:27017/oam-api-test

How (collected into a single shell sequence below):

  • set DB_URI=mongodb://<mongo IP>:27017/oam-api-test
  • in one window / tab, run mongod --bind_ip 0.0.0.0
  • in another window / tab, run docker-compose -f test/docker-compose.yml up
  • in a 3rd window / tab, run mocha test/integration (or curl http://localhost:4000, which also fails)

Docker Compose appears to be set up to expose :4000, so I don't know why I can't connect. Have you seen this?
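The steps above as one shell sequence (assuming the compose file reads DB_URI from the environment or .env):

# terminal 1: Mongo bound to all interfaces so the Docker containers can reach it
mongod --bind_ip 0.0.0.0

# terminal 2: the test stack, pointed at the host's Mongo
export DB_URI=mongodb://<mongo IP>:27017/oam-api-test
docker-compose -f test/docker-compose.yml up

# terminal 3: the integration tests (or a quick connectivity check)
mocha test/integration
curl http://localhost:4000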

@mojodna (Collaborator Author) commented Sep 20, 2017

For the HTTP endpoint used to update status / provide metadata, what do you (or anyone) see as alternatives? (I can't think of any reasonable alternatives right now.)

Mongo isn't an option because it's an implementation detail specific to OAM and workers need to update from wherever they're running (there may be 1 running locally or there may be 1000 running across a cluster somewhere with available CPU; it feels like optimizing for the latter makes the former possible but not vice versa).

Re: refactoring, all of that makes sense (and it's very helpful to see the process broken out through someone else's eyes), though beyond being cleaner, I don't see any immediate benefits. Steps 1 and 5 only occur if remote URIs are provided, and thumbnailing is really the only optional part of the process (thumbnail size can be set using THUMBNAIL_SIZE, which is the target size in KB).

This was necessary to get the tests running on macOS where Mongo was running on the host (binding to 0.0.0.0), the docker-compose'd app running in Docker (proxied to localhost using Docker for Mac), and the tests running on the host.
sensor: scene.sensor
},
title: scene.title
}
Collaborator Author

Potential metadata changes (these all match, but there may be parts that are missing).

};
}), callback);

// replace the urls list with a list of _id's
// NOTE this causes side-effects
Collaborator Author

I don't think these side-effects (which have always been present) are problematic, but they're something to look out for.

meta.gsd = request.payload.properties.resolution_in_meters;
meta.meta_uri = meta.uuid.replace(/\.tif$/, '_meta.json');
meta.properties = Object.assign(meta.properties, request.payload.properties);
meta.properties.tms = `${config.tilerBaseUrl}/${request.params.id}/${request.params.sceneIdx}/${request.params.imageId}/{z}/{x}/{y}.png`;
Collaborator Author

^^ potential API changes

(especially GeoJSON vs. WKT)

@mojodna mojodna mentioned this pull request Nov 27, 2017
@mojodna mojodna closed this Nov 27, 2017