Query Limit on Batchable Class
What is the best practice for handling governor limits when running queries inside a Batchable class?



I am running into query limits in a Batchable class because I have too many AggregateResult queries. I have a query that returns 100 results, and I need to run 3 SUM() queries on each of those results. I add each record to a list and do one update at the end. This will be called from a scheduler twice a day, so I don't want to limit the returned results.



I am not sure how to handle this. Does anyone have any suggestions?



global class PaymentAgreementMonitor implements Database.Batchable<sObject> {

global Database.QueryLocator start(Database.BatchableContext BC){
//Query for all Payment Agreements where Status = Active
String status = 'Active';
String query = 'SELECT Id, Balance_Paid__c, Total_Expected_Balance_Today__c, Ownership__c, Recovery__c, Start_Date__c, Matter__c, ' +
'Contingency_Payout_Method__c, Balance_Paid_Last_Updated__c FROM Payment_Agreement__c WHERE Status__c = \'' + status + '\'';

return Database.getQueryLocator(query);
}

global void execute(Database.BatchableContext BC, List<Payment_Agreement__c> scope){
system.debug('scope:: ' + scope);
scheduler(scope);
}

global void finish(Database.BatchableContext BC){
system.debug('finished batchable PaymentAgreementSchedule');
}

public static void scheduler(List<Payment_Agreement__c> payment_agreements){
List<Id> paListError = new List<Id>();
List<Payment_Agreement__c> paList = new List<Payment_Agreement__c>();
for (Payment_Agreement__c pa : payment_agreements) {
for(AggregateResult schPay : [SELECT SUM(Amount__c)sumAmt
FROM Scheduled_Payment__c
WHERE Payment_Agreement__c = :pa.Id
AND Scheduled_Date__c <=TODAY AND Amount__c != NULL]){
Decimal decimalAmount = (Decimal)schPay.get('sumAmt');
if(decimalAmount !=null) {
pa.Total_Expected_Balance_Today__c = math.abs(decimalAmount); }
}

Decimal decimalAmount;
for(AggregateResult schPay : [SELECT SUM(Amount__c)sumAmt
FROM Recovery_Payment__c
WHERE Recovery__c = :pa.Recovery__c
AND Date__c >= :pa.Start_Date__c]) {
decimalAmount = (Decimal)schPay.get('sumAmt');
}

//Calculate and insert the value for current Balance_Paid__c
if (pa.Ownership__c == 'In House') {
if(decimalAmount != null) {
pa.Balance_Paid_Last_Updated__c = DateTime.now();
pa.Balance_Paid__c = math.abs(decimalAmount);
}
} else if (pa.Ownership__c == 'Local Counsel' && pa.Contingency_Payout_Method__c == 'Contingency Paid to Local Counsel') {
if(decimalAmount != null) {
pa.Balance_Paid_Last_Updated__c = DateTime.now();
pa.Balance_Paid__c = math.abs(decimalAmount);
system.debug('Payment Agreement Contingency Paid to Local Counsel - Balance_Paid__c:: ' + pa.Balance_Paid__c);
}
} else if (pa.Ownership__c == 'Local Counsel' && pa.Contingency_Payout_Method__c == 'Net Paid to National Funding') {
Decimal sumAverageRecoveryPaymentsLocalNF = 0;
Decimal sumAverageExpenseAmount = 0;
if(decimalAmount != null) {
sumAverageRecoveryPaymentsLocalNF = decimalAmount ;
}

for(AggregateResult schPay : [SELECT SUM(advpm__Expense_Amount__c)sumExp
FROM advpm__Expense__c
WHERE advpm__Matter__c = :pa.Matter__c
AND advpm__Category__c = 'Contingency Fees']){
Decimal sumExpenseDecimal = (Decimal)schPay.get('sumExp');
if(sumExpenseDecimal != null){
sumAverageExpenseAmount = math.abs(sumExpenseDecimal);
}
}
Decimal sumRecoveryAndExpenses = sumAverageExpenseAmount + sumAverageRecoveryPaymentsLocalNF;
if(sumRecoveryAndExpenses != 0){
pa.Balance_Paid_Last_Updated__c = DateTime.now();
pa.Balance_Paid__c = math.abs(sumRecoveryAndExpenses);
}
} else{
pa.Total_Expected_Balance_Today__c = null;
pa.Balance_Paid_Last_Updated__c = null;
paListError.add(pa.Id);
}
paList.add(pa);
}
if(paListError.size() > 0 ){
NFLogger.logError('PaymentAgreementMonitor.cls', 'Payment Agreement record(s): ' + paListError + ' have incorrect data in Ownership__c or Contingency_Payout_Method__c fields.');
}
update paList;
}
}


UPDATES:
Updated Code from Suggestions:



private static void scheduler(List<Payment_Agreement__c> payment_agreements){
....
Map<Id, AggregateResult> scheduledPayments = new Map<Id, AggregateResult>([
SELECT SUM(Amount__c) amount, Payment_Agreement__c Id, Payment_Agreement__r.Recovery__c rec, Payment_Agreement__r.Start_Date__c stDate, Payment_Agreement__r.Matter__c mttr
FROM Scheduled_Payment__c
WHERE Payment_Agreement__c IN :payment_agreements
AND Scheduled_Date__c <=TODAY AND Amount__c != NULL
GROUP BY Payment_Agreement__c
]);

Map<Id, AggregateResult> recoveryPayments = new Map<Id, AggregateResult>([
SELECT SUM(Amount__c)sumAmt
FROM Recovery_Payment__c
WHERE Recovery__c IN :scheduledPayments.rec
AND Date__c >= :scheduledPayments.stDate]
);

Map<Id, AggregateResult> expenseAmount = new Map<Id, AggregateResult>([
SELECT SUM(advpm__Expense_Amount__c)sumExp
FROM advpm__Expense__c
WHERE advpm__Matter__c = :scheduledPayments.mttr
AND advpm__Category__c = 'Contingency Fees']
);

for (Payment_Agreement__c pa : payment_agreements) {
...

  • Can the recovery Date ever be less than the start date of Payment_Agreement__c?
    – Pranay Jaiswal
    Dec 6 at 22:32
apex batch scheduled-apex governorlimits schedulebatch
edited Dec 7 at 18:13
asked Dec 6 at 22:12
Olivia
2 Answers
You do not need to query these aggregates within your loops. It's not really any different from most other queries, with one notable exception: with aggregate queries you can use field aliases to take advantage of the built-in Map<Id, SObject>(List<SObject>) constructor.



Map<Id, AggregateResult> scheduledPayments = new Map<Id, AggregateResult>([
SELECT SUM(Amount__c) amount, Payment_Agreement__c Id
FROM Scheduled_Payment__c
WHERE Payment_Agreement__c IN :payment_agreements
AND Scheduled_Date__c <=TODAY AND Amount__c != NULL
GROUP BY Payment_Agreement__c
]);
for (Payment_Agreement__c record : payment_agreements)
{
Decimal amount;
AggregateResult scheduledPayment = scheduledPayments.get(record.Id);
if (scheduledPayment != null)
{
amount = (Decimal)scheduledPayment.get('amount');
}
if (amount != null)
{
// further logic here
}
}


The process will be quite similar for your other aggregations, with the exception of your Start_Date__c filtering, which will involve a bit more thought. You may want to look at pre-computing that value as a rollup via a trigger.
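For the Start_Date__c complication, one sketch (not the answerer's exact suggestion, and assuming the field names from the question) is to trade the aggregate query for a single row query and do the per-agreement sums in Apex, since one shared SOQL query cannot apply a different date cutoff per record. Note this costs heap space proportional to the number of Recovery_Payment__c rows fetched.

```apex
// Hypothetical bulkification of the Recovery_Payment__c sums: one query,
// then per-agreement filtering on each record's own Start_Date__c in Apex.
Set<Id> recoveryIds = new Set<Id>();
for (Payment_Agreement__c pa : payment_agreements) {
    if (pa.Recovery__c != null) recoveryIds.add(pa.Recovery__c);
}

// Group payments by recovery so each agreement can apply its own cutoff.
Map<Id, List<Recovery_Payment__c>> paymentsByRecovery = new Map<Id, List<Recovery_Payment__c>>();
for (Recovery_Payment__c rp : [
        SELECT Recovery__c, Amount__c, Date__c
        FROM Recovery_Payment__c
        WHERE Recovery__c IN :recoveryIds AND Amount__c != null]) {
    if (!paymentsByRecovery.containsKey(rp.Recovery__c)) {
        paymentsByRecovery.put(rp.Recovery__c, new List<Recovery_Payment__c>());
    }
    paymentsByRecovery.get(rp.Recovery__c).add(rp);
}

for (Payment_Agreement__c pa : payment_agreements) {
    Decimal recoverySum; // plays the role of decimalAmount in the original code
    if (paymentsByRecovery.containsKey(pa.Recovery__c)) {
        for (Recovery_Payment__c rp : paymentsByRecovery.get(pa.Recovery__c)) {
            if (rp.Date__c >= pa.Start_Date__c) {
                recoverySum = (recoverySum == null ? 0 : recoverySum) + rp.Amount__c;
            }
        }
    }
    // ...existing Ownership__c / Contingency_Payout_Method__c branching here...
}
```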
  • this is a good idea but I am concerned that if I bring in 200 records, then I can only have one query in the entire method. Would a better route be to write some handling with getQueries() and, if we are at 199, add the rest of the record Ids to a list, exit the method, and somehow run the process again on the list? I am not entirely sure I will execute this, but something along those lines.
    – Olivia
    Dec 6 at 23:07










  • You're over thinking it. For the queries where you filter on Id, you can run one query to get all of the data.
    – Adrian Larson
    Dec 6 at 23:23










  • I am not entirely sure how I would achieve that with my query on advpm__Expense__c and Recovery_Payment__c as I am not just querying on ID.
    – Olivia
    Dec 6 at 23:29










  • @Olivia You can group by and filter by multiple filters. AggregateResult example, as shown by Adrian here, means you can take the queries out of the loops you have. Also, you need an update map so you can associate the values with the records correctly. More maps, less lists.
    – sfdcfox
    Dec 7 at 2:40










  • sorry I am still not understanding how this is possible. Not all of the objects have a direct relationship. Are you suggesting I do three nested maps to be able to gather bulk data for the four objects? Even if I Grouped By and Filtered, some of the objects have nested relationships that are reliant on the Payment Agreement object. I have updated my post with what I think you were saying but it (obviously) is written incorrectly
    – Olivia
    Dec 7 at 18:09
Rather than editing my code, I found that Salesforce recommends passing a smaller batch scope when running a Batchable class that will hit limits.




An optional parameter scope. This parameter specifies the number of
records to pass into the execute method. Use this parameter when you
have many operations for each record being passed in and are running
into governor limits. By limiting the number of records, you are
limiting the operations per transaction. This value must be greater
than zero. If the start method of the batch class returns a
QueryLocator, the optional scope parameter of Database.executeBatch
can have a maximum value of 2,000. If set to a higher value,
Salesforce chunks the records returned by the QueryLocator into
smaller batches of up to 2,000 records. If the start method of the
batch class returns an iterable, the scope parameter value has no
upper limit. However, if you use a high number, you can run into other
limits.
https://developer.salesforce.com/docs/atlas.en-us.apexcode.meta/apexcode/apex_batch_interface.htm




Because I have three queries nested inside a for loop, a batch of 200 records would issue 600 queries and exceed the 200-SOQL-query limit for asynchronous Apex. To resolve this I need to run each batch with a scope of 66 records at a time (66 × 3 = 198 queries, just under the limit).
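Concretely, the scope is the optional second argument to Database.executeBatch, so kicking off this class in 66-record chunks looks like:

```apex
// 66 records × 3 SOQL queries per record = 198 queries per execute(),
// under the 200-query governor limit for asynchronous Apex.
Database.executeBatch(new PaymentAgreementMonitor(), 66);
```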



This is a really great resource for Aggregate queries: https://blog.jeffdouglas.com/2010/02/22/soql-how-i-query-with-thee-let-me-count-the-ways/






share|improve this answer





















    Your Answer








    StackExchange.ready(function() {
    var channelOptions = {
    tags: "".split(" "),
    id: "459"
    };
    initTagRenderer("".split(" "), "".split(" "), channelOptions);

    StackExchange.using("externalEditor", function() {
    // Have to fire editor after snippets, if snippets enabled
    if (StackExchange.settings.snippets.snippetsEnabled) {
    StackExchange.using("snippets", function() {
    createEditor();
    });
    }
    else {
    createEditor();
    }
    });

    function createEditor() {
    StackExchange.prepareEditor({
    heartbeatType: 'answer',
    convertImagesToLinks: false,
    noModals: true,
    showLowRepImageUploadWarning: true,
    reputationToPostImages: null,
    bindNavPrevention: true,
    postfix: "",
    imageUploader: {
    brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
    contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
    allowUrls: true
    },
    onDemand: true,
    discardSelector: ".discard-answer"
    ,immediatelyShowMarkdownHelp:true
    });


    }
    });














    draft saved

    draft discarded


















    StackExchange.ready(
    function () {
    StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fsalesforce.stackexchange.com%2fquestions%2f241720%2fquery-limit-on-batchable-class%23new-answer', 'question_page');
    }
    );

    Post as a guest















    Required, but never shown

























    2 Answers
    2






    active

    oldest

    votes








    2 Answers
    2






    active

    oldest

    votes









    active

    oldest

    votes






    active

    oldest

    votes








    up vote
    4
    down vote













    You do not need to query these aggregates within your loops. It's not really any different than most other queries, with one notable exception: with aggregate queries you can use field alias functionality to use the built in Map<Id, SObject>(List<SObject>) constructor.



    Map<Id, AggregateResult> scheduledPayments = new Map<Id, AggregateResult([
    SELECT SUM(Amount__c) amount, Payment_Agreement__c Id
    FROM Scheduled_Payment__c
    WHERE Payment_Agreement__c IN :payment_agreements
    AND Scheduled_Date__c <=TODAY AND Amount__c != NULL
    GROUP BY Payment_Agreement__c
    ]);
    for (Payment_Agreement__c record : payment_agreements)
    {
    Decimal amount;
    AggregateResult scheduledPayment = scheduledPayments.get(record.Id);
    if (scheduledPayment != null)
    {
    amount = (Decimal)scheduledPayment.get('amount');
    }
    if (amount != null)
    {
    // further logic here
    }
    }


    The process will be quite similar for your other aggregations, with the exception of your Start_Date__c filtering, which will involve a bit more thought. You may want to look at pre-computing that value as a rollup via trigger.






    share|improve this answer





















    • this is a good idea but I am concerned that if I bring in 200 records, then I can only have one query in the entire method. Would a better route be to write some handling with getQueries() and if we are at 199 then I would add the rest of the record Ids to a list, exit the method, and some how run the process again on the list? I am not entirely sure I will execute this, but something along those lines.
      – Olivia
      Dec 6 at 23:07










    • You're over thinking it. For the queries where you filter on Id, you can run one query to get all of the data.
      – Adrian Larson
      Dec 6 at 23:23










    • I am not entirely sure how I would achieve that with my query on advpm__Expense__c and Recovery_Payment__c as I am not just querying on ID.
      – Olivia
      Dec 6 at 23:29










    • @Olivia You can group by and filter by multiple filters. AggregateResult example, as shown by Adrian here, means you can take the queries out of the loops you have. Also, you need an update map so you can associate the values with the records correctly. More maps, less lists.
      – sfdcfox
      Dec 7 at 2:40










    • sorry I am still not understanding how this is possible. Not all of the objects have a direct relationship. Are you suggesting I do three nested maps to be able to gather bulk data for the four objects? Even if I Grouped By and Filtered, some of the objects have nested relationships that are reliant on the Payment Agreement object. I have updated my post with what I think you were saying but it (obviously) is written incorrectly
      – Olivia
      Dec 7 at 18:09

















    up vote
    4
    down vote













    You do not need to query these aggregates within your loops. It's not really any different than most other queries, with one notable exception: with aggregate queries you can use field alias functionality to use the built in Map<Id, SObject>(List<SObject>) constructor.



    Map<Id, AggregateResult> scheduledPayments = new Map<Id, AggregateResult([
    SELECT SUM(Amount__c) amount, Payment_Agreement__c Id
    FROM Scheduled_Payment__c
    WHERE Payment_Agreement__c IN :payment_agreements
    AND Scheduled_Date__c <=TODAY AND Amount__c != NULL
    GROUP BY Payment_Agreement__c
    ]);
    for (Payment_Agreement__c record : payment_agreements)
    {
    Decimal amount;
    AggregateResult scheduledPayment = scheduledPayments.get(record.Id);
    if (scheduledPayment != null)
    {
    amount = (Decimal)scheduledPayment.get('amount');
    }
    if (amount != null)
    {
    // further logic here
    }
    }


    The process will be quite similar for your other aggregations, with the exception of your Start_Date__c filtering, which will involve a bit more thought. You may want to look at pre-computing that value as a rollup via trigger.






    share|improve this answer





















    • this is a good idea but I am concerned that if I bring in 200 records, then I can only have one query in the entire method. Would a better route be to write some handling with getQueries() and if we are at 199 then I would add the rest of the record Ids to a list, exit the method, and some how run the process again on the list? I am not entirely sure I will execute this, but something along those lines.
      – Olivia
      Dec 6 at 23:07










    • You're over thinking it. For the queries where you filter on Id, you can run one query to get all of the data.
      – Adrian Larson
      Dec 6 at 23:23










    • I am not entirely sure how I would achieve that with my query on advpm__Expense__c and Recovery_Payment__c as I am not just querying on ID.
      – Olivia
      Dec 6 at 23:29










    • @Olivia You can group by and filter by multiple filters. AggregateResult example, as shown by Adrian here, means you can take the queries out of the loops you have. Also, you need an update map so you can associate the values with the records correctly. More maps, less lists.
      – sfdcfox
      Dec 7 at 2:40










    • sorry I am still not understanding how this is possible. Not all of the objects have a direct relationship. Are you suggesting I do three nested maps to be able to gather bulk data for the four objects? Even if I Grouped By and Filtered, some of the objects have nested relationships that are reliant on the Payment Agreement object. I have updated my post with what I think you were saying but it (obviously) is written incorrectly
      – Olivia
      Dec 7 at 18:09















    answered Dec 6 at 22:21









    Adrian Larson

    up vote
    0
    down vote













    Rather than editing my code, I found that Salesforce recommends passing a smaller batch size (the optional scope parameter) when a Batchable class would otherwise hit governor limits.




    An optional parameter scope. This parameter specifies the number of
    records to pass into the execute method. Use this parameter when you
    have many operations for each record being passed in and are running
    into governor limits. By limiting the number of records, you are
    limiting the operations per transaction. This value must be greater
    than zero. If the start method of the batch class returns a
    QueryLocator, the optional scope parameter of Database.executeBatch
    can have a maximum value of 2,000. If set to a higher value,
    Salesforce chunks the records returned by the QueryLocator into
    smaller batches of up to 2,000 records. If the start method of the
    batch class returns an iterable, the scope parameter value has no
    upper limit. However, if you use a high number, you can run into other
    limits.
    https://developer.salesforce.com/docs/atlas.en-us.apexcode.meta/apexcode/apex_batch_interface.htm




    Because I have three queries nested inside a for loop, a batch of 200 records would issue 600 SOQL queries and blow past the 200-query governor limit. To stay under it, I execute the batch with a scope of 66 records at a time (66 × 3 = 198 queries per transaction).
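    A minimal sketch of kicking the batch off with a reduced scope (the Schedulable wrapper class name here is an assumption; the question only says a scheduler runs this twice a day):

    ```apex
    // Hypothetical scheduler wrapper for the batch class from the question.
    // Passing scope = 66 means each execute() transaction runs at most
    // 66 * 3 = 198 SOQL queries, under the 200-query governor limit.
    global class PaymentAgreementMonitorSchedule implements Schedulable {
        global void execute(SchedulableContext sc) {
            Database.executeBatch(new PaymentAgreementMonitor(), 66);
        }
    }
    ```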



    This is a really great resource for Aggregate queries: https://blog.jeffdouglas.com/2010/02/22/soql-how-i-query-with-thee-let-me-count-the-ways/






        answered Dec 7 at 19:46









        Olivia
