Overcome Queueable maximum depth limit in dev orgs



























To migrate "trees of related data" from an external system to Salesforce, I am using dynamic chains of Apex Queueables. Imagine I am synchronizing Accounts, Opportunities and Contacts from an external CRM into Salesforce.



There is a separate Queueable class for each object type, and to stay within limits each Queueable migrates only a limited number of records per run. A typical flow looks like this (a minimal code sketch follows the list):





  1. AccountQueueable: Get 10 Accounts


  2. ContactQueueable: Get 1000 Contacts of those 10 Accounts

  3. OpportunityQueueable: Get 200 Opptys from Contacts and Accounts in 1./2.


  4. AccountQueueable: Rerun for next 10 accounts



  5. ContactQueueable: Get related Contacts
    ...you get the scheme
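
Roughly, that chain looks like the following sketch (the class names and the paging cursor are simplified placeholders, not the real implementation):

public class CrmSyncQueueables {

    public class AccountQueueable implements Queueable, Database.AllowsCallouts {
        private final Integer cursor; // position in the external CRM's Account list (placeholder)
        public AccountQueueable(Integer cursor) { this.cursor = cursor; }
        public void execute(QueueableContext ctx) {
            // Fetch the next 10 Accounts from the external CRM and upsert them (omitted),
            // then chain the next level of the tree. Every System.enqueueJob call here adds
            // one level of chain depth, which is capped at 5 in Developer Edition/Trial orgs.
            System.enqueueJob(new ContactQueueable(cursor));
        }
    }

    public class ContactQueueable implements Queueable, Database.AllowsCallouts {
        private final Integer cursor;
        public ContactQueueable(Integer cursor) { this.cursor = cursor; }
        public void execute(QueueableContext ctx) {
            // Fetch and upsert the related Contacts (omitted), then continue the chain:
            // OpportunityQueueable next, or AccountQueueable again for the next 10 Accounts.
            System.enqueueJob(new AccountQueueable(cursor + 10));
        }
    }
}

The chain is started once with System.enqueueJob(new CrmSyncQueueables.AccountQueueable(0)); every further hop counts against the stack depth quoted below.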


This works with 3 related object types, but with more I am unable to run even the smallest scenario in my dev org, because I hit the limit documented here: https://developer.salesforce.com/docs/atlas.en-us.apexcode.meta/apexcode/apex_queueing_jobs.htm




For Developer Edition and Trial organizations, the maximum stack depth
for chained jobs is 5, which means that you can chain jobs four times
and the maximum number of jobs in the chain is 5, including the
initial parent queueable job.




I can't use Batch Apex, as I am not working on a single object. I also can't increase the limit, as Salesforce told me that it is a hard limit of dev orgs.



Maybe my overall approach is wrong?! What would you do here?










governorlimits asynchronous queueable-apex externalobjects

asked 13 hours ago by Robert Sösemann


















  • If I understand this right, you have custom Queueable classes for data synchronization purposes? If that's right, have you explored ETL tools? I understand that they come at a price, but with an upfront investment you will at least not end up with complex logic that hits limits and becomes difficult to troubleshoot later. You end up trading some capex for lower opex if you opt for ETL.

    – Jayant Das
    12 hours ago











  • I fully agree. This is legacy code... Any recommendations on how to start small? I know there is MuleSoft, and I guess it costs a fortune. What route do you recommend for starting small ETL-wise without being locked in by a vendor too early?

    – Robert Sösemann
    12 hours ago











  • I use Talend; it's free and has a great Salesforce connector. It also gives you the Java/Python code for the transformations you build, so you can plug that code in anywhere if needed. Bang-on product.

    – Pranay Jaiswal
    12 hours ago











  • I have worked with Informatica and that's quite useful too. There are others, e.g. MuleSoft, available on the market. In your situation, I think the best approach is to invest a bit in researching which product suits your use case and then make the final call. There's usually a trial version available for most of the products.

    – Jayant Das
    12 hours ago
















3 Answers

I'd say use a batchable class. What you need is a dynamic approach. Even though you're working with multiple objects, a batch class can still be used here. Here's a design pattern for you:



public class DynamicBatch implements Database.Batchable<BatchAction>, Database.Stateful, Database.AllowsCallouts {

    // Working set shared across all actions; Database.Stateful keeps it alive between chunks.
    public class StateInfo {
        public Account[] accounts = new Account[0];
        public Contact[] contacts = new Contact[0];
        public Opportunity[] opps = new Opportunity[0];
        // ...
    }

    StateInfo state = new StateInfo();

    // One unit of migration work. Each object type gets its own implementation.
    public interface BatchAction {
        void execute(StateInfo state);
    }

    public class AccountAction implements BatchAction {
        public void execute(StateInfo state) {
            // Call out for the next chunk of Accounts, upsert them, record them in state.accounts.
        }
    }

    public class ContactAction implements BatchAction {
        public void execute(StateInfo state) {
            // Fetch and upsert the Contacts related to state.accounts.
        }
    }

    public class OpportunityAction implements BatchAction {
        public void execute(StateInfo state) {
            // Fetch and upsert the Opportunities related to state.accounts / state.contacts.
        }
    }

    public Iterable<BatchAction> start(Database.BatchableContext context) {
        return new BatchAction[] { new AccountAction(), new ContactAction(), new OpportunityAction() };
    }

    public void execute(Database.BatchableContext context, List<BatchAction> scope) {
        // With a scope size of 1, exactly one action runs per transaction.
        scope[0].execute(state);
    }

    public void finish(Database.BatchableContext context) {
        if (!finished()) {
            // Re-launch to process the next slice of the migration, again with scope size 1.
            Database.executeBatch(new DynamicBatch(), 1);
        }
    }

    Boolean finished() {
        // Decide here whether the whole tree has been migrated.
        return false; // ...
    }
}


You can adjust this as you like, but hopefully you get the general idea. The batch class is launched with a scope size of 1, so each action runs in its own transaction. It behaves like an unkillable Queueable and, unlike a Queueable chain, it can be re-launched indefinitely. It also avoids "hacks" like swapping back and forth between future and queueable or some other design.
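
For completeness, launching the batch with a scope size of 1 (so each execute processes exactly one action) would look like this, assuming the sketch above is filled in:

// Queue the migration; every action now runs in its own transaction.
Id jobId = Database.executeBatch(new DynamicBatch(), 1);
System.debug('Queued DynamicBatch job: ' + jobId);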






– sfdcfox, answered 11 hours ago
























  • Awesome! As always!

    – Robert Sösemann
    11 hours ago











  • Let me try that out. I leave the question open as I am sure I will have some follow up questions.

    – Robert Sösemann
    11 hours ago











  • Sorry, maybe it's already a bit late, but I still don't get it. Imagine I am going to transfer Accounts, Contacts and Opptys from an external CRM (same structure and dependencies as in SFDC). Accounts are my dependency-less objects, and I am moving over 10,000 Accounts with related Contacts and Opptys. As I understand your code, the batch runs on generic jobs where a triple consists of Account, Contact and Opptys. Could you assume it's moving from one org to the other and add some code to showcase what happens where?! Or do you have a GitHub repo for this?

    – Robert Sösemann
    5 hours ago








  • @RobertSösemann Each interface-implementing class would perform the appropriate callouts (don't forget Database.AllowsCallouts), then insert/update/upsert whatever based on criteria, then go on to the next step. I suppose I could write a more comprehensive edit if you'd like.

    – sfdcfox
    5 hours ago











  • I'd buy you a beer or two at the next Dreamforce. Maybe in a GitHub gist?!

    – Robert Sösemann
    3 hours ago

































If you run a Batchable from Account, the execute method could query all the necessary child records and act accordingly. If the transaction limits would get busted there, spawn a Queueable from the batch. You won't reach maximum stack depth unless any one Queueable launched by the batch passes 5 deep.
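
A minimal sketch of that idea follows; apart from the standard objects and the Database/System calls, all names here are made up for illustration:

public class AccountTreeBatch implements Database.Batchable<sObject>, Database.AllowsCallouts {

    public Database.QueryLocator start(Database.BatchableContext bc) {
        // Accounts are the dependency-less roots of the tree.
        return Database.getQueryLocator([SELECT Id FROM Account]);
    }

    public void execute(Database.BatchableContext bc, List<Account> scope) {
        // Query and synchronize the Contacts and Opportunities of these Accounts here.
        // If that would blow the transaction limits, hand the heavy lifting to a
        // Queueable spawned from the batch, as suggested above.
        System.enqueueJob(new AccountTreeQueueable(new Map<Id, Account>(scope).keySet()));
    }

    public void finish(Database.BatchableContext bc) {
        // Runs once after all Account chunks have been processed.
    }

    // Hypothetical helper that processes the children of one chunk of Accounts.
    public class AccountTreeQueueable implements Queueable, Database.AllowsCallouts {
        private final Set<Id> accountIds;
        public AccountTreeQueueable(Set<Id> accountIds) { this.accountIds = accountIds; }
        public void execute(QueueableContext ctx) {
            // Fetch and upsert Contacts/Opportunities for accountIds (omitted).
        }
    }
}

Database.executeBatch(new AccountTreeBatch(), 10); would then walk the tree ten Accounts at a time.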






– Charles T, answered 12 hours ago













































Well, I have a hack.

We can't call a future from a future, but we can call a future from a Queueable and a Queueable from a future.

So from the 5th Queueable, call a future, and that future can then enqueue another Queueable, giving an effectively infinite chain in Developer orgs.

Edit: I did a demo of this recursion, calling a future from a Queueable and a Queueable from that future, and I was able to chain over 400 levels deep before my daily async Apex limit ran out.
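
A minimal sketch of that ping-pong, with illustrative names and an arbitrary stop condition (the real work done per hop is up to you):

public class ChainRelay {

    public class Step implements Queueable {
        private final Integer depth;
        public Step(Integer depth) { this.depth = depth; }
        public void execute(QueueableContext ctx) {
            // ... do one unit of migration work here ...
            if (depth >= 400) {
                return; // arbitrary stop condition so the chain does not run forever
            }
            if (Math.mod(depth, 4) == 0) {
                // Close to the dev-org chain-depth cap: hop through a future method,
                // which a Queueable is allowed to call and which restarts the chain.
                ChainRelay.relayViaFuture(depth + 1);
            } else {
                System.enqueueJob(new Step(depth + 1));
            }
        }
    }

    @future
    public static void relayViaFuture(Integer depth) {
        // A future method may in turn enqueue a Queueable, so the chain carries on
        // with a fresh depth counter (per the demo described above).
        System.enqueueJob(new Step(depth));
    }
}

Kick it off with System.enqueueJob(new ChainRelay.Step(1));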






– Pranay Jaiswal, answered 12 hours ago


























    • Aren't there similar limitations regarding futures in Dev Orgs?

      – Robert Sösemann
      12 hours ago






    • We can't call a future from a future, so that's there. But you can call 1 Queueable from a future; that's what you need to restart your chain ("No more than 0 in batch and future contexts; 1 in queueable context" method calls per Apex invocation).

      – Pranay Jaiswal
      12 hours ago











    • Sounds great. What's the drawback? Or where does this start to get hacky?

      – Robert Sösemann
      12 hours ago






    • A future method does not return an enqueued job Id, so you lose track of what's going on. I can't think of anything else at the moment. I made an engine that went 8 levels deep for some data-import work like yours; it didn't disappoint me.

      – Pranay Jaiswal
      12 hours ago










