[00:00:18] <rajith> chirino: btw do you know any netty folks at RH?
[00:00:39] <chirino> I think vert.x folks are the closest thing to it
[00:00:49] <rajith> chirino: as u said, they've spent years honing this area and we should get help from them
[00:00:53] <rajith> chirino: gotcha
[09:10:19] * ChanServ sets mode: +o purplefox
[10:12:36] <pmlopes> @purplefox, @temporalfox, @cescoffier good morning
[10:57:01] * ChanServ sets mode: +o temporalfox
[11:15:58] <Sticky_> amr: fyi the saving document with an _id has been fixed but not released: https://github.com/vert-x3/vertx-mongo-client/pull/49/files
[11:32:36] <Sticky_> I take it it is not really possible to use generic types with codegen methods?
[11:47:48] <amr> ah super thanks!
[13:42:36] *** ChanServ sets mode: +o temporalfox
[17:23:30] <gemmellr> chirino: ping
[17:40:50] <michel_> Hi !
[17:41:16] <michel_> any service proxy nerds out there?
[17:45:01] <michel_> I'm curious to see if someone has been successful in implementing efficient error handling over the event bus through a proxy service (i.e: be able to rely on something else than plain error message with -1 error code)
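A hedged sketch of the idea michel_ is raising: instead of failing an event-bus reply with a bare -1 code and a plain message string, the service side can encode a structured error (stable code, message, extra details) in the reply body and decode it on the caller side. The names here (`AppError`, `toWire`, `fromWire`) are invented for illustration and are not vert.x service-proxy API; vert.x itself would carry this as JSON.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical structured error for event-bus replies: carries a stable
// machine-readable code plus free-form details, instead of just "-1" + text.
final class AppError {
    final int code;                    // application-defined failure code
    final String message;              // human-readable summary
    final Map<String, String> details; // extra context for the caller

    AppError(int code, String message, Map<String, String> details) {
        this.code = code;
        this.message = message;
        this.details = details;
    }

    // Flatten to a map, standing in for the JSON body of a failed reply.
    Map<String, Object> toWire() {
        Map<String, Object> m = new LinkedHashMap<>();
        m.put("code", code);
        m.put("message", message);
        m.put("details", details);
        return m;
    }

    @SuppressWarnings("unchecked")
    static AppError fromWire(Map<String, Object> m) {
        return new AppError((Integer) m.get("code"),
                            (String) m.get("message"),
                            (Map<String, String>) m.get("details"));
    }
}

public class Main {
    public static void main(String[] args) {
        AppError err = new AppError(404, "order not found",
                Map.of("orderId", "42"));
        // Round-trip through the wire form, as a proxy reply handler would.
        AppError decoded = AppError.fromWire(err.toWire());
        System.out.println(decoded.code + " " + decoded.message);
    }
}
```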
[17:51:28] <chirino> gemmellr: pong
[17:52:37] <gemmellr> chirino: hi
[17:53:30] <gemmellr> chirino: first question was, should I add further commits to the <trivial> PR I raised earlier, or make more PRs as needed (given they would be dependent)
[17:54:43] <chirino> merged
[17:54:51] <gemmellr> chirino: next up, I was wondering about ProtonReceiverImpl's onDelivery() behaviour..but have since figured out it was working differently than I expected, so doesn't really matter
[17:54:54] <chirino> you can keep adding :)
[17:55:07] <chirino> in the future.
[17:56:45] <gemmellr> chirino: I was a bit puzzled by the commented out advance() call and the lack of isPartial() checking…but I see now how it's working
[17:57:28] <chirino> yeah I'm hazy on all the details too!
[17:58:00] <gemmellr> (settle calls advance if on the current message, and we only give them 1 message at a time until they settle it in the 'async' case)
[17:58:47] <chirino> ok
[18:00:00] <gemmellr> whereas I had been thinking more that we would give each message to them until the credit ran out (decision then being whether to expose a way to get more credit, or require they settle to get more)
[18:02:02] <chirino> so I was hoping we could receive all the initial credit window messages
[18:02:12] <chirino> then async settle them later
[18:02:29] <chirino> you're saying the user app will only receive 1 at the moment?
[18:02:56] <gemmellr> looks like it..without the advance it will stay on the 'current' message until it settles
[18:03:18] <chirino> could you add a unit test for that and fix it :)
[18:03:33] <gemmellr> I can try ;)
[18:08:48] <gemmellr> chirino: there actually already is one :)
[18:09:16] <gemmellr> noticing that earlier would have been quite useful, lol
[18:09:20] <chirino> yes.. well it does not test being able to receive multiple messages without an ack
[18:10:28] <chirino> I think
[18:10:29] <gemmellr> it tests that you don't get one until you ack..so it'll need to be removed
[18:11:45] <chirino> so that's with flow(1)
[18:11:58] <chirino> so it's a valid test for that scenario
[18:12:25] <chirino> it basically shows that credit window is not expanded until settle.run()
[18:12:45] <chirino> need a test with like flow(10)
[18:12:54] <gemmellr> ah, good point, i missed the flow(1) down there
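The behaviour gemmellr and chirino are pinning down (whether the receiver hands out messages until the credit window runs dry, and how settling tops the window back up) can be sketched as a toy model. `ToyReceiver`, `flow`, `offer`, and `settle` here are illustrative stand-ins, not the vertx-proton API; this is just the credit accounting, with no real wire protocol.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

// Toy model of link credit: flow(n) grants n credits, each delivery consumes
// one, and settling a delivery is what replenishes the window.
class ToyReceiver {
    private final Queue<String> incoming = new ArrayDeque<>();
    private int credit;
    final List<String> delivered = new ArrayList<>();

    void flow(int n) { credit += n; pump(); }

    void offer(String msg) { incoming.add(msg); pump(); }

    // Hand messages to the application while credit remains -- the
    // "multiple messages without an ack" case the test should cover.
    private void pump() {
        while (credit > 0 && !incoming.isEmpty()) {
            delivered.add(incoming.remove());
            credit--;
        }
    }

    void settle() { flow(1); } // settling one delivery frees one credit
}

public class Main {
    public static void main(String[] args) {
        ToyReceiver r = new ToyReceiver();
        r.flow(10);
        for (int i = 0; i < 12; i++) r.offer("m" + i);
        // With flow(10), ten messages reach the app before any settle...
        System.out.println(r.delivered.size());
        r.settle(); // ...and settling one lets the eleventh through
        System.out.println(r.delivered.size());
    }
}
```

With `flow(1)` instead, the same model reproduces the existing test: only one message is delivered until it is settled.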
[18:16:30] <chirino> been thinking streaming http style message over links might be better than AMQP messages.
[18:16:57] <chirino> less encoding/decoding overhead, and we can get real streaming
[18:20:26] <gemmellr> do you mean use transfer frames that carry non-amqp-message bytes?
[18:22:14] <chirino> yeah
[18:23:13] <gemmellr> interesting idea…it would rather mess up integrating with many other AMQP peers though
[18:23:41] <chirino> yeah would not be true AMQP.
[18:24:31] <chirino> but would be an interesting experiment that (if it works better) you could take to the working group.
[18:25:35] <gemmellr> ive never tried it, but i believe you are meant to be able to send incomplete deliveries…at which point you can get 'true streaming' just using Data sections since you can send multiples of those in a message
[18:26:21] <chirino> oh
[18:26:31] <gemmellr> not that Proton-J directly supports that, it must be said…
[18:26:53] <gemmellr> and actually, i take it back…think you still need to know the total message size at the start, so wouldnt be true streaming as such
[18:30:53] <chirino> so a link can stream large bytes streams right?
[18:30:59] <chirino> you can just keep writing to it?
[18:31:07] <gemmellr> the protocol does already support sending such 'different-content' in transfer frames, its what the 'message-format' field on the frames is for…but as above, doing that means you mostly are only using the transport layer and anything using the messaging layer (which is basically everything) is going to choke when they see it
[18:31:09] <chirino> or does it have to buffer up?
[18:33:47] <gemmellr> do you mean the proton impl, or the spec? :)
[18:35:25] <gemmellr> links carry deliveries, which can consist of multiple transfer frames…each transfer frame needs to know its size, each message can have multiple 'data' sections which need to know their size as they are written…each transfer needs to know if it's the last frame or not..so technically you can stream based on that
[18:36:49] <gemmellr> what exactly proton supports, is another question…im sure ive seen something in there about writing out the 'current' delivery before its done….I cant say i know anyone doing that…and certainly, the Message object in proton-j wont really support the usage if only because it doesnt support multiple data sections (but should)
[18:37:19] <gemmellr> there are other ways to encode data sections however…
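The framing gemmellr describes (each transfer frame carries its own size plus a flag saying whether more frames follow, so the total payload size is never needed up front) can be sketched as a toy chunker. `Chunk` and `frame` are invented names for illustration; real AMQP transfers carry more state (delivery-id, message-format, etc.) than this shows.

```java
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

// Toy framing in the spirit of AMQP transfers: each chunk knows its own
// size and whether more chunks follow for the same delivery.
class Chunk {
    final byte[] body;
    final boolean more; // false on the final transfer of the delivery

    Chunk(byte[] body, boolean more) { this.body = body; this.more = more; }
}

public class Main {
    // Split a payload into frames of at most maxFrame bytes; only the
    // final frame clears the 'more' flag, so the sender can stream.
    static List<Chunk> frame(byte[] payload, int maxFrame) {
        List<Chunk> out = new ArrayList<>();
        for (int off = 0; off < payload.length; off += maxFrame) {
            int len = Math.min(maxFrame, payload.length - off);
            byte[] body = new byte[len];
            System.arraycopy(payload, off, body, 0, len);
            out.add(new Chunk(body, off + len < payload.length));
        }
        return out;
    }

    public static void main(String[] args) {
        byte[] payload = "streaming over a link".getBytes(StandardCharsets.UTF_8);
        List<Chunk> chunks = frame(payload, 8);
        System.out.println(chunks.size());
        System.out.println(chunks.get(chunks.size() - 1).more);
        // The receiver reassembles by appending bodies until more == false.
        StringBuilder sb = new StringBuilder();
        for (Chunk c : chunks) sb.append(new String(c.body, StandardCharsets.UTF_8));
        System.out.println(sb);
    }
}
```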
[18:42:45] <gemmellr> ok I'm getting shouted at about the whereabouts of dinner…time I wasn't here
[18:44:02] <gemmellr> chirino: I raised another PR with a further small change in it…ill get to that other test+fix on Monday if I dont get a chance over the weekend…
[18:44:09] <chirino> ok!
[22:11:06] <drfits> Hi, is there any ORM for vert.x ?
[22:23:44] <Ashe`> drfits: https://github.com/aesteve/vertx-hibernate-service
[22:23:47] <Ashe`> something like that?
[22:24:15] <Ashe`> although I doubt hibernate likes asynchronicity much
[22:24:23] <Ashe`> (or the drivers behind it)
[22:25:36] <drfits> no, like http://docs.sequelizejs.com/en/latest/
[22:28:07] <drfits> I guess that no one wants to have hibernate in production with vert.x because in that case hibernate would be a bottleneck
[22:37:42] <drfits> which logger should I prefer? it seems that log messages should be sent to the message bus?